Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: igl@ecs.soton.ac.uk (Ian Glendinning) Subject: Re: MPI(message passing interface)standards Organization: Electronics and Computer Science, University of Southampton References: <1993Sep29.124012.6790@hubcap.clemson.edu> In <1993Sep29.124012.6790@hubcap.clemson.edu> chu@feldspar.egr.msu.edu (Yung-Kang Chu) writes: >I would like to request some information (ftp address) about >the new MPI standard. If you know anything about MPI forum, >please send e-mail to me. This may be of interest to more people, so I'll post it here. Information about the standard can be obtained by e-mail from netlib. You can find out what files are available by sending a message to netlib@ornl.gov with a blank subject line, and containing the single line: send index from mpi It also tells you how to fetch the files, which include a recent draft of the standard. There will be a tutorial on MPI at the Supercomputing '93 conference in Portland, Oregon, in November. Ian -- I.Glendinning@ecs.soton.ac.uk Ian Glendinning Tel: +44 703 593368 Dept of Electronics and Computer Science Fax: +44 703 593045 University of Southampton SO9 5NH England Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ahearn@fys.ruu.nl (Tony Hearn) Subject: POSTDOC POSITION, THE NETHERLANDS Keywords: magnetohydrodynamics, parallel computing Organization: Physics Department, University of Utrecht, The Netherlands Date: Wed, 29 Sep 1993 15:24:39 GMT PLEASE POST POSTDOCTORAL POSITION ASTRONOMICAL INSTITUTE, UTRECHT UNIVERSITY, THE NETHERLANDS PARALLEL COMPUTATION OF MAGNETOHYDRODYNAMICS IN THERMONUCLEAR AND ASTROPHYSICAL PLASMAS The Astronomical Institute in the Department of Physics and Astronomy at Utrecht University in the Netherlands has a postdoctoral position available for two years for research on parallel computing applied to time dependent magnetohydrodynamics of astrophysical and thermonuclear plasmas. The successful candidate will work in a group of four postdocs. Two are working on time dependent magnetohydrodynamics calculations applied to astrophysics and plasma physics and two will work on parallel computing applied to time dependent magnetohydrodynamics. The group is divided between the Astronomical Institute, Utrecht University and the FOM Institute for Plasma Physics located just outside Utrecht. This work is a cooperation between the two Institutes and is under the direction of Professor J.P. Goedbloed, Professor A.G. Hearn and Professor M. Kuperus. The research into parallel computing is in cooperation with Professor H.A. van der Vorst, Mathematics Institute, Utrecht University. At present there is a Parsytec GC 512 and an IBM SP1 at Amsterdam, and a Thinking Machines CM 5 at Groningen. These computers are accessible through the computer network. The Utrecht University Computer Centre has a Meiko MK200. Applications are invited from astrophysicists, physicists and computational scientists who have or will shortly obtain a Ph.D. Experience with the numerical methods of time dependent (magneto)hydrodynamics and/or parallel computing is an advantage. The salary will be according to age and experience from Hfl 4100 up to a maximum of Hfl 4600 gross per month ( US $1 ~ Hfl 1.80 ). The starting date for the appointment is flexible. Further information may be obtained from Professor A.G. Hearn, preferably by email. Applications should reach Professor A.G. 
Hearn before 31st October 1993. They should contain a curriculum vitae, a list of publications, and a short description of research interests, together with the names and addresses (with email addresses if possible) of three persons who may be asked to write a reference on the suitability of the applicant for the position. Applications may be submitted by email, fax or letter to:

   Professor A. G. Hearn
   Sterrekundig Instituut
   Postbus 80000
   3508 TA Utrecht
   The Netherlands

   Email ahearn@fys.ruu.nl
         30453::27752::ahearn
         ahearn@solar.bitnet
   Fax   intl +31 30 535201
   Tel.  intl +31 30 535202

--
A. G. Hearn   Postbus 80000   3508 TA Utrecht   The Netherlands

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.ai.neural-nets,comp.parallel
From: anderson@CS.ColoState.EDU (Chuck Anderson)
Subject: Looking for evaluations of the CNAPS architecture
Sender: news@yuma.ACNS.ColoState.EDU (News Account)
Nntp-Posting-Host: copland.cs.colostate.edu
Organization: Colorado State University, Computer Science Department

I would like to know how the performance of neural network computations on Adaptive Solutions' CNAPS architecture compares to that of other SIMD architectures. Also, any comments about the development effort required for experimenting with non-standard neural network algorithms using the CNAPS would be helpful. If there is sufficient interest, I'll summarize the responses and post them to the net.

--
Chuck Anderson, assistant professor        anderson@cs.colostate.edu
Department of Computer Science             303-491-7491
Colorado State University                  FAX: 303-491-6639
Fort Collins, CO 80523

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: kavaklis@ics.forth.gr (Yannis &)
Subject: ICS-FORTH Technical Reports
Organization: Institute of Computer Science, FORTH Hellas
Summary: Recent papers and technical reports from ICS, FORTH via anonymous FTP.
Keywords: Technical Reports, ICS-FORTH, Greece

TECHNICAL REPORTS ANNOUNCEMENT
Institute of Computer Science
FORTH, Greece

This note serves to announce the availability of recent papers and technical reports from ICS, FORTH via anonymous FTP. Currently (or soon to become) available are several papers on parallel systems software, architecture, information systems and software engineering, networks, vision, etc. Bibliographic citations and abstracts for the current papers appear in the ABSTRACTS file of the ftp site. New papers will be added from time to time. Each citation includes (at the end of the abstract) the name of the compressed postscript source file.

You can retrieve them via anonymous ftp from ariadne.ics.forth.gr, i.e.:

   ftp ariadne.ics.forth.gr
   login 'anonymous'
   password your_email_address
   cd tech-reports

For more information contact Maria Prevelianaki (mariab@ics.forth.gr).
_____________________________________________________________________________
A brief profile of ICS-FORTH follows:
_____________________________________________________________________________

Institute of Computer Science
Foundation for Research and Technology - Hellas
Heraklion, Crete, Greece
e-mail: user@ics.forth.gr

HISTORY
-------
The Institute of Computer Science (ICS) is one of the first three research institutes founded in 1983 as part of the Research Center of Crete (RCC).
ICS is now one of seven research institutes constituting the Foundation for Research and Technology - Hellas (FORTH), which is a center for research and development monitored by the Ministry of Industry, Energy and Technology (General Secretariat of Research and Technology) of the Greek Government. FORTH was formed in November 1987, when the Research Center of Crete (founded in 1983) was merged with Institutes from Patras and Thessaloniki. Today, FORTH is the second largest center in Greece comprising seven Institutes : Computer Science, Electronic Structure and Lasers, Molecular Biology and Biotechnology, Applied and Computational Mathematics, Chemical Process Engineering, Chemical Engineering \& High Temperature Chemical Processes, and Mediterranean Studies. ICS-FORTH has established itself as an internationally known and highly competitive research institute of computer science, with a modern infrastructure and a broad range of R&D and educational activities. ACTIVITIES ---------- Current R&D activities focus on the following areas: - Information Systems - Software Engineering - Parallel Architectures and Distributed Systems - Computer Vision and Robotics - Parallel Implementation of Integrated Vision Tasks - Digital Communications - Network Management - Machine Learning - Decision Support Systems - Formal Methods in Concurrent Systems - Computer Architectures and VLSI Design - Computer Aided Design - Medical Informatics - Rehabilitation Tele-Informatics The names of contact persons for each of the above activities can be obtained by sending e-mail to mariab@ics.forth.gr ICS employs a full-time scientific and technical staff of 40, mostly holding post-graduate degrees, and offers scholarships to over 120 graduate and undergraduate students, who participate in R&D activities. Furthermore, ICS employs 12 faculty members of the Department of Computer Science, University of Crete, 7 faculty members of the Technical University of Crete, and several faculty members of other Universities in Greece and abroad. ICS has been very active in European competitive R&D programmes and currently participates in a number of projects in the ESPRIT, RACE, AIM, and TIDE programmes, the Mediterranean Integrated Programme for Informatics, the NATO Science for Stability Programme, STRIDE, STAR, etc. External funding accounts for approximately 65% of ICS's budget. Several well equipped laboratories have been created, where powerful workstations are connected through local area networks with very large file servers and are gatewayed to international networks. ICS is the official Greek node for BITNET (EARN) and INTERNET (ITEnet, EUnet), providing an international electronic networking service to the Greek research community at large. ITEnet in particular, which is FORTH's integrated network providing very high speed local area connectivity (FDDI), is expanding rapidly and is expected to cover soon all major Greek cities, providing fast access (64 kbps) to advanced services. ICS-FORTH represents Greece in the European Research Consortium for Informatics and Mathematics (ERCIM), an organization dedicated to the advancement of European research and development, in the areas of information technology and applied mathematics. Ten European countries are currently participating in ERCIM. ICS has developed cooperation with other universities, research centers, and companies, as well as with Greek scientists living outside Greece, thus establishing a continuous exchange of scientific ideas and technology transfer. 
In addition, ICS has adopted a strategy of promoting the commercial exploitation of R&D results, by:
- providing services (e.g. consulting, performing studies, etc.);
- contracting with industrial partners for specific products;
- participating in startup companies and joint ventures.

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: tanner@nas.nasa.gov (Leigh Ann Tanner)
Subject: Intel Supercomputer User's Group Newsletter Available
Organization: NAS/NASA-Ames Research Center

The first electronic edition of the Intel Supercomputer Users' Group newsletter is now available via ftp. This issue includes:

* Late-breaking news: Oak Ridge National Laboratory accepts delivery of one of the largest Paragon(tm) supercomputers installed to date -- a 512-node configuration with peak performance of 38 GFLOPS.
* An article from Users' Group chair Gary Lamont (Air Force Institute of Technology) on what's in store at the upcoming Supercomputer Users' Group Annual Conference, which takes place Oct. 3-6 in St. Louis.
* A report from Marsha Jovanovic (San Diego Supercomputer Center) on the Paragon supercomputer and SDSC's recent summer institute.
* A wrap-up from Thierry Priol (IRISA) on the Intel Supercomputer European Users' Group Annual Conference, held last June in Munich.
* An in-depth interview with Oregon State University professor and HPC user advocate Dr. Cherri Pancake on Intel's new ParAide software development environment, which will be available this fall on the Paragon supercomputer.
* A write-up on this summer's SuperQuest Northwest, where winning teams of high school students had a crack at parallel programming on an iPSC(R)/860 computer.
* An Intel News Roundup -- all the news that fits, including new Paragon supercomputer configurations, a C++ CRADA with Sandia National Laboratories, an alliance with Unisys to develop scalable computers based on the Pentium(tm) processor, and more.

The ISUG newsletter is available via anonymous ftp at export.ssd.intel.com (137.102.222.129). The files are in plain ascii format, in the /pub/isug/ directory. The first issue is called "isug_newsletter.09.93"; subsequent issues will be named to reflect the month and year of publication. Happy reading!

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: blight@eeserv.ee.umanitoba.ca (David C Blight)
Subject: Message-Passing simulators
Sender: news@ccu.umanitoba.ca
Nntp-Posting-Host: tequila.ee.umanitoba.ca
Organization: Electrical Engineering, U of Manitoba, Winnipeg, Manitoba, Canada
Date: Wed, 29 Sep 1993 21:33:33 GMT

I am curious if anyone knows of any freely available message-passing simulators. I am not looking for anything specific, as I am just curious about what is available, if anything. I don't know if there is any interest in this sort of software. I have done what I suspect most people do: written my own simulator for the algorithms I am interested in.
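For concreteness, here is a minimal sketch of the kind of home-grown simulator being described. It is illustrative only, assuming uniform random traffic on an N x N mesh with dimension-order (XY) routing, and it counts only static link loads -- no wormhole flow control, no contention, no timing. Every name and parameter in it is invented for this sketch and is not taken from any existing package.

/*
 * mesh_load.c -- illustrative sketch only: uniform random traffic on an
 * N x N mesh with dimension-order (XY) routing, counting how many
 * messages cross each unidirectional link.  No flow control, no timing.
 */
#include <stdio.h>
#include <stdlib.h>

#define N        16          /* mesh is N x N nodes                 */
#define PACKETS  100000      /* number of random source/dest pairs  */

/* load[x][y][d]: messages leaving node (x,y) on link d
 * (0 = +x, 1 = -x, 2 = +y, 3 = -y)                                 */
static long load[N][N][4];

static void route_xy(int sx, int sy, int dx, int dy)
{
    /* dimension-order routing: correct x first, then y */
    while (sx != dx) {
        load[sx][sy][dx > sx ? 0 : 1]++;
        sx += dx > sx ? 1 : -1;
    }
    while (sy != dy) {
        load[sx][sy][dy > sy ? 2 : 3]++;
        sy += dy > sy ? 1 : -1;
    }
}

int main(void)
{
    long total = 0, max = 0;
    int p, x, y, d, links = 0;

    srand(1);
    for (p = 0; p < PACKETS; p++)
        route_xy(rand() % N, rand() % N, rand() % N, rand() % N);

    for (x = 0; x < N; x++)
        for (y = 0; y < N; y++)
            for (d = 0; d < 4; d++) {
                /* skip links that do not exist on the mesh boundary */
                if ((d == 0 && x == N - 1) || (d == 1 && x == 0) ||
                    (d == 2 && y == N - 1) || (d == 3 && y == 0))
                    continue;
                links++;
                total += load[x][y][d];
                if (load[x][y][d] > max)
                    max = load[x][y][d];
            }

    printf("%d links, mean load %.1f, max load %ld messages\n",
           links, (double)total / links, max);
    return 0;
}

Swapping in a different routing rule or traffic pattern is where a real study would start; anything beyond simple link counts (queueing delay, wormhole blocking) calls for the kind of event-driven simulator being asked about here.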
I am hoping that maybe there are some simulators available that will do common routing algorithms (worm-hole and that sort of stuff).

Dave Blight
blight@ee.umanitoba.ca

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: Crispin Cowan
Subject: Re: HERMES
Organization: Department of Computer Science, University of Western Ontario, London
Message-ID: <9309292223.AA07675@theodore.csd.uwo.ca>

In article <1993Sep29.123855.6202@hubcap.clemson.edu> you write:
>Does anyone have any info., such as Technical Report refs., on the
>Hermes language designed for programming large, concurrent and
>distributed applications?

Sure:

@book ( her91,
  author = "Robert E. Strom and David F. Bacon and Arthur Goldberg and Andy Lowry and Daniel Yellin and Shaula Alexander Yemini",
  title = "{Hermes: A Language for Distributed Computing}",
  publisher = "Prentice Hall",
  year = 1991 )

@article ( str86,
  author = "Robert E. Strom and Shaula Alexander Yemini",
  title = "{Typestate: A Programming Language Concept for Enhancing Software Reliability}",
  journal = "IEEE Transactions on Software Engineering",
  volume = 12, number = 1, month = "January", year = 1986, pages = "157-171" )

@inproceedings{ str90,
  author = "Robert E. Strom",
  title = "{Hermes: An Integrated Language and System for Distributed Programming}",
  booktitle = "1990 Workshop on Experimental Distributed Systems",
  location = "Huntsville, AL", year = 1990, month = "April" }

@inproceedings{ bac90h,
  author = "David F. Bacon and Robert E. Strom",
  title = "{A PORTABLE RUN-TIME System for the Hermes Distributed Programming Language}",
  booktitle = "Summer 1990 USENIX Conference",
  location = "Anaheim, CA", year = 1990, month = "June" }

Crispin
-----
Crispin Cowan, CS grad student, University of Western Ontario
Phyz-mail: Middlesex College, MC28-C, London, Ontario, N6A 5B7
E-mail: crispin@csd.uwo.ca   Voice: 519-661-3342
"If you see a skier, turn. Trees too."
   Burton rendition of the Skier's^H^H^H^H^H^H^H Snowboarder's Responsibility Code

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: achilles@ira.uka.de (Alf-Christian Achilles)
Subject: Re: Papers for parallel data compression wanted
Organization: Universitaet Karlsruhe, Karlsruhe, Deutschland.
References: <1993Sep29.123748.5659@hubcap.clemson.edu>
Nntp-Posting-Host: i90fs2.ira.uka.de
In-Reply-To: cnb@ipp-garching.mpg.de's message of Wed, 29 Sep 1993 12:37:48 GMT
Sender: newsadm@ira.uka.de

>>>>> In article <1993Sep29.123748.5659@hubcap.clemson.edu>, cnb@ipp-garching.mpg.de (Christian Brosig INF.) writes:
> I'm looking for papers or books on parallel data compression.
> Can anyone help me?

I hope that I can: these are some references I found in my biblio. No guarantee for correctness, quality or availability.

- Alf

<<<<<<--CUT HERE----------------------
@InProceedings{Bassiouni88, author = "M. A. Bassiouni and N. Ranganathan and Amar Mukherjee", title = "A Scheme for Data Compression in Supercomputers", booktitle = "Proceedings Supercomputing'88", pages = "272--278", publisher = "IEEE and ACM SIGARCH", address = "Orlando, FL", month = nov, year = "1988", keywords = "mass storage, MSS, in hardware,", abstract = "Does not deal with parallel processing directly, but it does help supercomputers.", note = "F. Central FL.", } @Article{Gajski80, author = "Daniel D.
Gajski", title = "Parallel Compressors", journal = "IEEE Transactions on Computers", volume = "C-29", number = "5", pages = "393--398", month = may, year = "1980", keywords = "Associative processors, carry-shower counters, content-addressable memory, elementary logic functions, fast multipliers, high-speed arithmetic, multiple-operand addition, parallel counters Correspondence", } @InProceedings{Newman92, author = "Daniel M. Newman and Dennis L. Goeckel and Richard D. Crawford and Seth Abraham", title = "Parallel Holographic Image Calculation and Compression", booktitle = "Proc. Frontiers '92: Fourth Symp. on Massively Parallel Computation", pages = "557--559", publisher = "IEEE", address = "McLean, VA", month = oct, year = "1992", keywords = "poster session,", } @InProceedings{Ntafos87, author = "Simeon Ntafos and Eliezer Dekel and Shietung Peng", title = "Compression Trees and Their Applications", booktitle = "Proceedings of the 1987 International Conference on Parallel Processing", pages = "132--139", publisher = "Penn State", address = "University Park, Penn.", month = aug, year = "1987", keywords = "Data Structures", } @InProceedings{Reif88, author = "John H. Reif and James A. Storer", title = "Real-Time Dynamic Compression of Video on a Grid-Connected Parallel Computer", booktitle = "Proceeding Supercomputing Projects, Applications and Artificial Intelligence", volume = "1", pages = "453--462", publisher = "Third International Conference on Supercomputing (ICS '88)", address = "St. Petersberg, FL", year = "1988", } @Article{Sijstermans91, author = "Frans Sijstermans and Jan van der Meer", title = "{CD}-{I} Full Motion Video Encoding on a Parallel Computer", journal = "Communications of the ACM", volume = "34", number = "4", pages = "81--91", month = apr, year = "1991", keywords = "CR Categories: C.1.2 [Processor Architectures]: multiprocessors, parallel processors; D.1.3 [Programming Techniques]: concurrent programming, I.4.2 [Image Processing]: Compression (coding) - approximate methods General Terms: Design, Performance, CD-I, interactive video, POOMA", } @InProceedings{Thomborson91, author = "Clark D. Thomborson and Belle W. Y. Wei", title = "Systolic Implementations of a Move-to-Front Text Compressor", booktitle = "Symposium on Parallel Algorithms and Architecture, Computer Architectures News", pages = "53--60", month = mar, year = "1991", keywords = "special issue,", note = "Published as Symposium on Parallel Algorithms and Architecture, Computer Architectures News, volume 19, number 1", } @InProceedings{Tinker??, author = "Michael Tinker", title = "The Implementation of Parallel Image Compression Techniques", booktitle = "Proceeding Supercomputing '88: Technology Assessment, Industrial Supercomputer Outlooks, European Supercomputing Accomplishments, and Performance & Computations", volume = "2", pages = "209--215", publisher = "Third International Conference on Supercomputing (ICS '88)", address = "St. 
Petersberg, FL", } @InProceedings{AgoSto92, author = "De Agostino and Storer", title = "Parallel Algorithms for Optimal Compression Using Dictionaries with the Prefix Property", booktitle = "Data Compression Conference", publisher = "IEEE Computer Society TCC", year = "1992", } @InProceedings{CarSto92, author = "Carpentieri and Storer", title = "A Split-Merge Parallel Block-Matching Algorithm for Video Displacement Estimation", booktitle = "Data Compression Conference", publisher = "IEEE Computer Society TCC", year = "1992", } @InProceedings{HowVit92, author = "Howard and Vitter", title = "Parallel Lossless Image Compression Using Huffman and Arithmetic Coding", booktitle = "Data Compression Conference", publisher = "IEEE Computer Society TCC", year = "1992", } @InProceedings{StoRei90, author = "Storer and Reif", title = "A Parallel Architecture for High Speed Data Compression", booktitle = "Frontiers of Massively Parallel Scientific Computation", publisher = "National Aeronautics and Space Administration (NASA), IEEE Computer Society Press", year = "1990", } @Article{StoRei91, author = "Storer and Reif", title = "A Parallel Architecture for High-Speed Data Compression", journal = "Journal of Parallel and Distributed Computing", volume = "13", year = "1991", } @TechReport{BadJaJChe92, author = "Bader and Ja'Ja' and Chellappa", title = "Scalable Data Parallel Algorithms for Texture Synthesis and Compression Using Gibbs Random Fields", year = "1992", } @TechReport{StaHir91, author = "Stauffer and Hirschberg", title = "Parallel Data Compression", year = "1991", } @InProceedings{ThoWei89, author = "Thomborson and Wei", title = "Systolic Implementations of a Move-to-Front Text Compressor", booktitle = "Annual ACM Symposium on Parallel Algorithms and Architectures", year = "1989", } @InProceedings{ZitoWolf90, author = "Zito-Wolf", title = "A Broadcast/Reduce Architecture for High-Speed Data Compression", booktitle = "2nd IEEE Symposium on Parallel and Distributed Processing", publisher = "ACM Special Interest Group on Computer Architecture (SIGARCH), and IEEE Computer Society", year = "1990", } @InProceedings{HenRan90, author = "Henriques and Ranganathan", title = "A Parallel Architecture for Data Compression", booktitle = "2nd IEEE Symposium on Parallel and Distributed Processing", publisher = "ACM Special Interest Group on Computer Architecture (SIGARCH), and IEEE Computer Society", year = "1990", } @Article{Storer85, author = "M. E. Gonzalez-Smith and J. A. Storer", title = "Parallel Algorithms for Data Compression", year = "1985", journal = "J. ACM", volume = "32", institution = "Brandeis U", pages = "344--373", keywords = "IMAGE INFORMATION, STATISTICS", } @Article{tinker89a, author = "Michael Tinker", title = "{DVI} Parallel Image Compression", pages = "844--851", journal = "Communications of the ACM", volume = "32", number = "7", year = "1989", month = jul, keywords = "image compression", annote = "", } Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stubbi@gmd.de (Stephan Springstubbe, Z1HR 2337) Subject: Re: Supremum design? Summary: Suprenum (SUPerREchner fuer NUMerische Anwendungen) Keywords: Suprenum, Cluster Organization: GMD, Sankt Augustin, Germany. References: <1993Sep29.123833.6031@hubcap.clemson.edu> Hi Ralf! The Suprenum is an MIMD/SIMD system. You can see a 128-processor system in GMD (Gesellschaft fuer Mathematik und Datenverarbeitung, Sankt Augustin). 
Each Suprenum computing node comprises:
  + a vector-processing unit (20 MFlop/s with chaining)
  + a processor for program control (MC 68020)
  + a communication processor
  + 8 MB memory

16 such computing nodes are interconnected via a high-speed bus (320 MB/s) to form a cluster. In addition, each cluster is equipped with:
  + a disk-controller node (2 GB hard disk)
  + a dedicated node for monitoring and diagnosis
  + 2 communication nodes for linking up with the upper interconnection system

The interconnection system is provided by the Suprenum bus system, which connects a 'grid' of clusters, toroidally in each direction, by a double serial bus (125 MB/s). Our largest configuration comprises 16 clusters interconnected to form a 4x4 matrix, representing 256 computing nodes. The cluster system is accessible by the user via a front-end computer.

You can find more information in:

U. Trottenberg, K. Solchenbach: Parallele Algorithmen und ihre Abbildung auf parallele Rechnerarchitekturen. it 30(2), 1988
U. Trottenberg: The Suprenum Project: Idea and Current State. Suprenum Report 8, Suprenum GmbH, Bonn, 1988
U. Trottenberg (ed.): Proceedings of the 2nd International Suprenum Colloquium, "Supercomputing based on parallel computer architectures". Parallel Computing, Vol. 7, 3, 1988
H.P. Zima, H.J. Bast, M. Gerndt: SUPERB: A tool for semi-automatic MIMD/SIMD parallelization. Parallel Computing, 6, 1988
E. Kehl, K.-D. Oertel, K. Solchenbach, R. Vogelsang: Application benchmarks on Suprenum. Supercomputer, March 1991
C.-A. Thole: Programmieren von Rechnern mit verteiltem Speicher. PIK 13, 1990
  <This is a short introduction to the Suprenum architecture and SUPERB>

Greetings, Stephan

Stephan Springstubbe
German National Research Center For Computer Science (GMD)
Department of Supercomputing
Schloss Birlinghoven
53757 Sankt Augustin
Germany

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: Alba.Balaniuk@imag.fr (Alba Balaniuk)
Subject: Simulator of a processor network needed
Organization: IMAG Institute, University of Grenoble, France

Hi, I am interested in shared virtual memory (SVM) mechanisms for parallel loosely coupled architectures. I have already designed an SVM server and now I want to implement and validate it. The problem is that I do not have an adequate multiprocessor or processor network to use for the implementation. So, does anybody know about a network or multiprocessor simulator which I can use for this purpose? If so, is it public domain software? You can send the answers to my email: Alba.Balaniuk@imag.fr

Thanks for all the information.
Alba
-------------------------------------------
Alba C. M. M. Balaniuk
Laboratoire de Genie Informatique (LGI)
IMAG / INPG
Grenoble, France
-------------------------------------------

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: stubbi@gmd.de (Stephan Springstubbe, Z1HR 2337)
Subject: Re: MPI(message passing interface)standards
Summary: MPI-FTP address
Organization: GMD, Sankt Augustin, Germany.
References: <1993Sep29.124012.6790@hubcap.clemson.edu>
Date: Thu, 30 Sep 1993 14:38:08 GMT

Hi Yung-Kang!
I am not involved in this project (unlike Rolf Hempel), but information can be ftp-ed from:
   info.mcs.anl.gov
in directory:
   /pub/mpi

Greetings, Stephan

Stephan Springstubbe
Department of Supercomputing
Schloss Birlinghoven
53757 Sankt Augustin

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel,comp.parallel.pvm
From: "Przemyslaw Stpiczynski"
Subject: PVM 3.0 under SCO UNIX

Hi, I am trying to install PVM 3.0 under SCO Unix System V/386 Release 3.2, but I still have some problems. In particular, I do not know how to update the generic Makefile body for SCO 386. I have tried to update the generic makefile, but it doesn't work, so I would be very grateful for information on how to do it. Please send any suggestions directly to me.

Best regards
Przemek
przem@golem.umcs.lublin.pl

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: lusk@mcs.anl.gov (Rusty Lusk)
Subject: Re: MPI(message passing interface)standards
Reply-To: lusk@mcs.anl.gov (Rusty Lusk)
Organization: Argonne National Laboratory, Chicago, Illinois
References: <1993Sep29.124012.6790@hubcap.clemson.edu> <1993Sep30.165333.15705@hubcap.clemson.edu>

In article <1993Sep30.165333.15705@hubcap.clemson.edu>, stubbi@gmd.de (Stephan Springstubbe, Z1HR 2337) writes:
>
>I am not involved in this project (unlike Rolf Hempel), but
>information can be ftp-ed from:
>   info.mcs.anl.gov
>in directory:
>   /pub/mpi
>Greetings, Stephan

This is not correct. The right place for MPI information is netlib. Please see Ian Glendinning's post above.

Rusty Lusk

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
Date: Thu, 30 Sep 93 14:51:15 CDT
From: degroot@s905.dseg.ti.com (Doug DeGroot)
Subject: PARLE '94 - Call for Papers

                            CALL FOR PAPERS
                            ---------------
         PARLE'94 (PARALLEL ARCHITECTURES AND LANGUAGES EUROPE)
       June 13-17, 1994 -- Intercontinental Hotel, Athens, Greece

Parallel processing is a discipline of strategic significance within the Information Technology sector. The PARLE conferences have been established as major events of world-wide reputation, where Academia and Industry meet to exchange ideas and discuss key issues of common interest. PARLE'94 will build upon the successes of the previous conferences and will cover the full spectrum of parallel processing, ranging from theory to design and applications of Parallel Computer Systems. PARLE'94 will include tutorials covering advanced parallel processing techniques and an exhibition featuring many of the leading international suppliers of parallel machines and software.

CONFERENCE AREAS
------------------------------------------------------------------------------
Parallel Machines and Systems
  -- shared, clustered and distributed machines
  -- massively parallel machines
  -- neural networks
  -- evaluation, simulation and benchmarking

Design of Parallel Programs
  -- parallel algorithms
  -- complexity analysis
  -- specification and verification
  -- formal program development methodologies

Parallel Operating Systems
  -- scheduling
  -- load balancing
  -- memory management
  -- run time systems

Parallel Programming Languages
  -- language constructs
  -- semantics
  -- programming environments
  -- implementation issues

Design of Parallel Architectures
  -- multiprocessor design issues
  -- interconnection networks
  -- cache organization
  -- specification and verification
  -- special purpose architectures

Applications of Parallel Systems
  -- decision support
  -- databases
  -- industrial and business applications
  -- scientific computing

SUBMISSION OF PAPERS
--------------------------------------------------------------------------------
Authors should send five paper copies of a full draft, in English and not exceeding 6000 words, to the official conference mailing address (see below) before 19 November 1993. A cover page should contain the author's full name, address, telephone number, FAX number, e-mail address and a 100-word abstract together with a few keywords (taken from the list of conference topics above). To allow submissions to be refereed blind, the author's identity should appear only on the cover page. Authors of high quality papers that cannot be included in the main conference session but deserve presentation will be invited to participate in a poster session to be held during the conference. An A4 page summary of such papers will be published in the proceedings. Authors must indicate on the cover sheet whether or not they would accept such an invitation. Authors will be notified of acceptance by 11 February 1994. Camera-ready copies of accepted papers will be required by 18 March 1994.

=========================================================================
A best paper award will be presented at the conference. A best paper award will also be presented for contributors under the age of 30, provided they are the principal authors of the paper.
==========================================================================

OFFICIAL CONFERENCE MAILING ADDRESS
--------------------------------------------------------------------------
PARLE'94 / CTI
3, Kolokotroni str.
262 21 Patras, Greece.
Tel. (+3061) 220 112
FAX. (+3061) 222 086
e-mail: parle@cti.gr

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: kwb@sturgeon.cs.washington.edu (Kevin W. Bolding)
Subject: Re: Message-Passing simulators
Organization: University of Washington Computer Science
References: <1993Sep30.165228.15292@hubcap.clemson.edu>

In article <1993Sep30.165228.15292@hubcap.clemson.edu>, blight@eeserv.ee.umanitoba.ca (David C Blight) writes:
|> I am curious if anyone knows of any freely available
|> message-passing simulators. I am not looking for anything
|> specific as I am just curious about what is available if
|> anything.

The Chaos Router simulator is available from the University of Washington. This is a flit-based simulator which can simulate dimension-order oblivious routing (packet or wormhole), chaotic adaptive routing, and any other algorithm you'd like to program. Essentially the entire family of k-ary d-cubes (multi-dimensional meshes, tori and hypercubes) is supported. Graphic visualization tools are included as well. If you are interested, let us know.

Kevin Bolding
Dept. of Computer Science and Engineering
University of Washington
kwb@cs.washington.edu

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: jon@cs.uow.edu.au (Jonathon Gray)
Subject: Australian KSR distributor
Organization: University of Wollongong, NSW, Australia.

Can anyone give me the name, number, and address of the Australian KSR agent?

Jon Gray
jon@cs.uow.edu.au

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
From: frank@per.geomechanics.csiro.au (Frank Horowitz)
Newsgroups: comp.parallel,sci.math.num-analysis
Subject: F90 LAPACK?
(MasPar SVD &/or BLAS)
Date: 1 Oct 1993 04:06:51 GMT
Organization: CSIRO Exploration & Mining

Folks,
I'm trying to get an SVD to run on a 1k MP1 (callable from mpl, if anyone cares). I'm currently trying to bring up LAPACK's SGESVD. I've found the (Institutt for Informatikk, University of Bergen, Norway) para//ab's implementation of BLAS for the MasPar (send index from maspar; via netlib@nac.no). The problem is, the BLAS implementation requires call parameters in FORTRAN90 array syntax (where appropriate) and ignores the f77-style array dimensions and shape parameters in the call lists. Netlib returns about 40 related routines for SGESVD, and I'm shuddering at trying to get the conversions right by hand... Has any kind soul translated the LAPACK call lists into F90 array syntax? (Of course, I'll be satisfied with SGESVD and friends for the moment :-) Alternatively, does anyone have a MasPar implementation of an SVD via another route?

Thanks (as "they" say) in advance!
Frank
__________________________________________________________________
Frank Horowitz; frank@per.geomechanics.csiro.au
CSIRO-Exploration & Mining, POBox 437, Nedlands, WA 6009 AUSTRALIA
(09)389-8421 (Int'l +61 9 389 8421), FAX (+61 9 389 1906)

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: aus.parallel,comp.parallel,comp.parallel.pvm
From: junaid@nella32.cc.monash.edu.au (Mr A. Walker)
Subject: Simulating multi-processor networks
Organization: Monash University

Hi,
I'm trying to simulate the communication performance of several multi-processor architectures using distributed memory and message passing. Does anyone know of any programs that would simulate a fixed network under varying node message distributions (i.e. uniform, sphere-of-locality, decaying exponential, etc.) and give communication load measures at each node (e.g. number of messages passing through each node link, visit ratios, etc.)? Alternatively, a general simulation programming library or simulation program would be useful.

Thanks, Junaid.

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
Path: cs.cornell.edu!prakas
From: Indu Prakas Kodukula
Subject: HPF/F90 Benchmarks
Sender: USENET news user
Nntp-Posting-Host: hel.cs.cornell.edu
Organization: Cornell University, CS Dept., Ithaca, NY
Date: Thu, 30 Sep 1993 19:24:33 GMT
Apparently-To: comp-parallel@eddie.mit.edu

Hi,
Does anyone have information about F90/HPF benchmark suites that have been developed and where they are available?

Thanks in advance.
-Indu -- Indu Prakas Kodukula e-mail: prakas@cs.cornell.edu Addr 710 ETC Phone: (607)254-8833 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: Re: Papers for parallel data compression wanted Organization: Professional Student, University of Maryland, College Park References: <1993Sep29.123748.5659@hubcap.clemson.edu> <1993Sep30.165341.15734@hubcap.clemson.edu> In article <1993Sep30.165341.15734@hubcap.clemson.edu> achilles@ira.uka.de (Alf-Christian Achilles) writes: >@TechReport{BadJaJChe92, > author = "Bader and Ja'Ja' and Chellappa", > title = "Scalable Data Parallel Algorithms for Texture > Synthesis and Compression Using Gibbs Random Fields", > year = "1992", This paper has been substantially rewritten, and is available through UMIACS at the University of Maryland, College Park (kellogg@cs.umd.edu = Betty Kellogg, the Technical Report Librarian). The new report has been submitted to a conference and journal for possible publication. Note that this work has been implemented on both a CM-2 and a CM-5, and gives (in my opinion, and from the research that I have seen) the first "real" data parallel algorithm complexity analysis for the newer massively parallel machines like the CM-5. Here is the abstract: This paper introduces scalable data parallel algorithms for image processing. Focusing on Gibbs and Markov Random Field model representation for textures, we present parallel algorithms for texture synthesis, compression, and maximum likelihood parameter estimation, currently implemented on Thinking Machines CM-2 and CM-5. Use of fine-grained, data parallel processing techniques yields real-time algorithms for texture synthesis and compression that are substantially faster than the previously known sequential implementations. Although current implementations are on Connection Machines, the methodology presented here enables machine independent scalable algorithms for a number of problems in image processing and analysis. -david David A. Bader Electrical Engineering Department A.V. Williams Building - Room 3142-A University of Maryland College Park, MD 20742 301-405-6755 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.theory,comp.parallel From: eisenbar@Informatik.TU-Muenchen.DE (Eisenbarth) Subject: Graph Partitioning Keywords: Graph Partitioning Organization: Technische Universitaet Muenchen, Germany Hi, this time I'm interested in your hints, suggestions etc. concerning graph partitioning. The graphs in question are either * bipartite graphs (processors and memory cells as nodes) or * representing multiprocessor networks (processors as nodes). Please send your responses to eisenbar@informatik.tu-muenchen.de. Thanks in advance, Thomas -- Thomas Eisenbarth eisenbar@informatik.tu-muenchen.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: UKELLER@CIPVAX.BIOLAN.UNI-KOELN.DE (Udo Keller) Subject: PALLAS announces PM6 for KSR1 Organization: BIOLAN - COLOGNE UNIVERSITY (FRG) Bruehl, Germany, October 1, 1993 PARMACS V6.0 for KSR1 is available now. The port of PARMACS V6.0 to the Kendall Square Research KSR1 has been accepted by KSR. PARMACS is a European standard message passing interface and plays a central role in major benchmark activities. 
Information, performance figures and more are obtainable from: info@pallas-gmbh.de

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: gadre@parcom.ernet.in
Subject: Parallel chemistry code
Organization: SUNY/Buffalo Computer Science

This is something I received from the Center for Development of Advanced Computing (C-DAC), Pune, India about their efforts in parallel programming in applications for chemistry. Since they do not have a news server there, I am posting it on their behalf. (Further correspondence should be made to gadre@parcom.ernet.in, not me.)
-Milind A. Bhandarkar.
------------------------------------------------------------

September 30, 1993

Dear Dr. Harrison:

I have received your e-mail inquiring about our efforts in using parallel computers in the field of Chemistry. I am happy to inform you about the sequential and parallel code development by my group in this area. My group at the Department of Chemistry, University of Poona has been working on parallel solutions to Quantum Chemical problems for the last four years. During this course, we have developed an ab initio Molecular Orbital Package called INDMOL. The purpose of this package is to generate the wavefunction for a given molecular system, to evaluate various molecular properties, and to visualize them. The package consists of three modules, one for each of the above functions. The first module, INDSCF, generates the wave function, which is fed to the second module, INDPROP, where we can generate molecular properties like the molecular electrostatic potential (MESP), molecular electron density (MED), electron momentum density (EMD), topography of MESP, MED and EMD, various moments, population analysis, etc. The point-dependent molecular properties can be visualized with the help of the third module, INDVISUAL.

Of the above three modules, the first two, viz. INDSCF and INDPROP, are parallelized on the distributed memory MIMD machine PARAM, developed by C-DAC on their software platform PARAS. The whole parallel effort has resulted in a ~30,000-line FORTRAN code. These programs have been tested on 64- and 128-node versions of PARAM. The parallel algorithm is such that there is no significant performance degradation even in the presence of any inhomogeneity in the hardware. This claim has been tested on PARAM with the use of processors with different ratings. Also, a distinct advantage of this parallelization effort is that the whole of the code is written indigenously, which makes it possible to port it to any parallel machine of our choice. Sequential versions of the above programs are also available on DOS as well as UNIX based systems.

The work done in the course of developing the above codes has resulted in the following publications:
-----------------------------------------------------------------------------
s.no.  title / author(s) / journal
-----------------------------------------------------------------------------
1. A General Parallel Solution to the Integral transformation and MP2 energy evaluation on distributed memory parallel machines.
   A.C. Limaye, S.R. Gadre -- J. Chem. Phys. (in press, 1994)
2. Graphics visualization of molecular surfaces.
   S.R. Gadre, A. Taspa -- J. Mol. Graphics (in press, 1993)
3. Development of a Restricted Hartree-Fock program INDMOL on PARAM: A Highly Parallel Computer.
   R.N. Shirsat, A.C. Limaye, S.R. Gadre -- J. Comp. Chem. 14, 445 (1993)
4. Molecular electrostatics of [V O ] cluster: a graphics visualization study using PARAM.
   S.R. Gadre, S. Bapat, A. Taspa, R.N. Shirsat -- Curr. Sci. (India) 62, 798 (1992)
5. Parallelization of two-electron integrals in molecular orbital programs.
   S.R. Gadre, S.A. Kulkarni, A.C. Limaye, A. Taspa, R.N. Shirsat -- a chapter in "Advanced Computing", Ed. V. P. Bhatkar, Tata-McGraw Hill (1991)
6. Some aspects of parallelization of two-electron integrals in molecular orbital programs.
   S.R. Gadre, S.A. Kulkarni, A.C. Limaye, R.N. Shirsat -- Zeit. Phys. D: Atoms, Molecules and Clusters 18, 357 (1991)
7. A General parallel algorithm for the generation of molecular electrostatic potential maps.
   S.R. Gadre, S.V. Bapat, K. Sundararajan, I.H. Shrivastava -- Chem. Phys. Letters 175, 307 (1990)
8. Computation of molecular electrostatic potential: Efficient algorithm and parallelization.
   S.R. Gadre, S. Bapat, I.H. Shrivastava -- Computers and Chemistry 15, 203 (1991)
-----------------------------------------------------------------------------

I hope that this information will be sufficient. We will appreciate receiving the information you have collected regarding other parallel efforts. Thank you very much.

Yours sincerely,
Professor S. R. Gadre

--
+-----------------------------------------------------------------------------+
| Milind A. Bhandarkar    | e-mail: mb@cs.buffalo.edu |
| Home Address:           | Office Address:           |
| 116 Englewood Avenue,   | Department of Comp. Sci.  |

Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: toomer@CS.ColoState.EDU (christopher toomer)
Subject: Need info on weather modeling on parallel machines.
Sender: news@yuma.ACNS.ColoState.EDU (News Account)
Date: Fri, 01 Oct 1993 19:59:50 GMT
Nntp-Posting-Host: beethoven.cs.colostate.edu
Organization: Colorado State University, Computer Science Department

Hi all,
I am writing a research paper for an undergraduate parallel systems class. I have a special interest in weather modelling, and could use some help finding information about modeling weather on parallel machines. Does anyone know of books, theses, or people who are doing research in this area? Please e-mail me at either of the addresses below with any information you have. Thank you for your assistance.

Chris Toomer
toomer@cs.colostate.edu
toomer@lily.aerc.colostate.edu
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: eugen@research.nj.nec.com (Eugen Schenfeld)
Subject: REMINDER: MPP-OI Workshop
Originator: eugen@juliet
Keywords: Reconfigurable Parallel Architectures, MPP, Optical Interconnections
Sender: eugen@research.nj.nec.com (Eugen Schenfeld)
Reply-To: kasinath@research.nj.nec.com
Organization: NEC Research Institute
Date: Sat, 2 Oct 93 01:09:31 GMT
Apparently-To: uunet!comp-parallel

=============================================================================
                                  REMINDER
=============================================================================

                        Call for Participation for
                   The First International Workshop on
        MASSIVELY PARALLEL PROCESSING USING OPTICAL INTERCONNECTIONS

                             April 26-27, 1994
                    Westin Hotel Regina, Cancun, Mexico

Sponsored by:
  ACM Special Interest Group on Architecture (SIGARCH)
  The Optical Society of America (OSA)
  IEEE Technical Committee on Parallel Processing (TCPP)
  IEEE Technical Committee on Computer Architecture (TCCA)
  The Air Force Office of Scientific Research (AFOSR)

to be held in conjunction with the Eighth International Parallel Processing Symposium (IPPS)

SYMPOSIUM: The eighth annual International Parallel Processing Symposium (IPPS '94) will be held April 26-29, 1994 at the Westin Hotel Regina, Cancun, Mexico. The symposium is sponsored by the IEEE Computer Society and will be held in cooperation with ACM SIGARCH. IPPS '94 is a forum for engineers and scientists from around the world to present the latest research findings in all aspects of parallel processing.

WORKSHOP: The first annual workshop on Massively Parallel Processing Architectures using Optical Interconnections (MPP-OI '94) will be held on the first and second days of the Symposium (April 26-27). The workshop's focus is the possible use of optical interconnections for massively parallel processing systems, and their effect on system and algorithm design. Optics offer many benefits for interconnecting large numbers of processing elements, but may require us to rethink how we build parallel computer systems and communication networks, and how we write applications. Fully exploring the capabilities of optical interconnection networks requires an interdisciplinary effort. It is critical that researchers in all areas of the field are aware of each other's work and results. The intent of MPP-OI is to assemble the leading researchers and to build towards a synergistic approach to MPP architectures, optical interconnections, operating systems, and software development. The workshop will feature an invited address, followed by several sessions of submitted papers, and conclude with a panel discussion entitled "Ways of Using Optical Interconnections for MPPs In The Near Future (less than 10 years from now)". Authors are invited to submit manuscripts which demonstrate original unpublished research in areas of computer architecture and optical interconnections.
The topics of interest include but are not limited to the following: Reconfigurable Architectures Optical interconnections Embedding and mapping of applications and algorithms Packaging and layout of optical interconnections Electro-optical, and opto-electronic components Relative merits of optical technologies (free-space, fibers, wave guides) Passive optical elements Algorithms and applications exploiting MPP-OI Data distribution and partitioning Characterizing parallel applications exploiting MPP-OI Cost/performance studies SUBMITTING PAPERS: All papers will be reviewed by at least 2 members of the program committee. Send five (5) copies of complete paper (not to exceed 15 single spaced, single sided pages) to: Dr. Eugen Schenfeld MPP-OI '94 Workshop Chair NEC Research Institute 4 Independence Way Princeton, NJ 08540 USA Manuscripts must be received by October 30, 1993. Due to the large number of anticipated submissions manuscripts postmarked later than October 30, 1993 risk rejection. (Post overseas submissions air mail.) Notification of review decisions will be mailed by December 31, 1993. Camera ready papers are due January 29, 1994. Fax or electronic submissions will not be considered. Proceedings will be published by the IEEE CS Press and will be available at the symposium. WORKSHOP CHAIR: Eugen Schenfeld NEC Research Institute 4 Independence Way Princeton, NJ 08540 (voice) (609)951-2742 (fax) (609)951-2482 email: MPPOI@RESEARCH.NJ.NEC.COM FOR MORE INFORMATION: Please write (email) to the Workshop Chair. PROGRAM COMMITTEE: Karsten Decker, Swiss Scientific Computing Center, Manno, Switzerland Patrick Dowd, Dept. of ECE, SUNY at Buffalo, Buffalo, NY John Feo, Comp. Res. Grp., Lawrence Livermore Nat. Lab., Livermore, CA Asher Friesem, Dept. of Electronics, Weizmann Inst., Israel Allan Gottlieb, Dept. of CS, New-York University, New-York, NY Joe Goodman, Department of EE, Stanford University, Stanford, CA Alan Huang, Computer Systems Research Lab., Bell Labs., Holmdel, NJ Yoshiki Ichioka, Dept. of Applied Physics, Osaka U., Osaka, Japan Leah Jamieson, School of EE, Purdue University, West Lafayette, IN Lennart Johnsson, Div. of Applied Science, Harvard U. and TMC, Cambridge, MA Israel Koren, Dept. of ECS, U. of Mass, Amherst, MA Raymond Kostuk, Dept. of ECE, U. of Arizona, Tucson, AZ Philippe Lalanne, Inst. D'Optique, Orsay, France Sing Lee, Dept. of EE, UCSD, La Jolla, CA Steve Levitan, Department of EE, U. of Pittsburgh, Pittsburgh, PA Adolf Lohmann, Institute of Physics, U. of Erlangen, Erlangen, Germany Miroslaw Malek, Dept. of ECE, U. of Texas at Austin, Austin TX J. R. Moulic, IBM T. J. Watson Research Center, Yorktown Heights, NY Miles Murdocca, Department of CS, Rutgers University, New Brunswick, NJ John Neff, Opto-elec. Comp. Sys., U. of Colorado, Boulder, CO Viktor Prasanna, Department of EE, USC, Los-Angeles, CA Paul Prucnal, Department of EE, Princeton U., Princeton, NJ John Reif, Department of CS, Duke University, Durham, NC A. A. Sawchuk, Dept. of EE, USC, Los-Angeles, CA Eugen Schenfeld, NEC Research Institute, Princeton, NJ Larry Snyder, Department of CS, U. of Washington, Seattle, WA Harold Stone, IBM T. J. Watson Research Center, Yorktown Heights, NY Les Valiant, Div. of Applied Science, Harvard University, Cambridge, MA CANCUN, MEXICO: The Yucatan peninsula with a shoreline of over 1600 kilometers is one of Mexico's most exotic areas. Over a thousand years ago the peninsula was the center of the great Mayan civilization. 
Cancun with it's powder fine sand and turquoise water is a scenic haven for sun lovers and archaeological buffs alike, and our Mexican hosts are eager to extend every hospitality for our visit to their part of the world. Air travel to Cancun is available from most major U.S. cities, and U.S. and Canadian citizens do not require visas to visit Mexico. The Westin Hotel Regina is a self-contained meeting facility with spacious, air-conditioned rooms, on-site restaurants, and all the services of a world class hotel. Travel packages to various other nearby hotels (including reduced airfare, and accommodation) are also available from most travel agents. Cancun is a dazzling resort with golf, tennis, and every water sport under the sun, and the area offers exciting nightlife, fabulous shopping, and historic Mayan ruins. ======================================================================== Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: "Andrew Singleton" Subject: Ncube address Date: Fri, 01 Oct 93 13:10:15 -0500 Organization: Creation Mechanics Could someone please post me the name and address of a marketing person at Ncube? Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: steveq@umiacs.umd.edu (Stephen Quirolgico) Subject: Compiler questions Date: 2 Oct 1993 22:37:47 GMT Organization: UMIACS, University of Maryland, College Park, MD 20742 Nntp-Posting-Host: ghidrah.umiacs.umd.edu Hello, I have a number of questions -- any help would be greatly appreciated. 1. Does there exist a Modula* compiler for a SIMD, SAMD, or Cray-T3D machine? I would be very interested in prototypes that exist. 2. Does anyone have any information on the Triton Project from the University of Karlsruhe? 3. Does there exist a parallel logic programming compiler for a SIMD, SAMD, or Cray-T3D? I am particularly interested in a SAMD or T3D compiler. I would be very interested in any prototypes that exist. Thank you, Stephen Quirolgico Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: herlihy@crl.dec.com (Maurice Herlihy) Subject: Tech report: contention in shared-memory algorithms Organization: DEC Cambridge Research Lab A new CRL technical report is available: CRL Technical Report 93/12 Contention in Shared Memory Algorithms Cynthia Dwork, Maurice Herlihy, and Orli Waarts August 6, 1993 Most complexity measures for concurrent algorithms for asynchronous shared-memory architectures focus on process steps and memory consumption. In practice, however, performance of multiprocessor algorithms is heavily influenced by contention, the extent to which processes access the same location at the same time. Nevertheless, even though contention is one of the principal considerations affecting the performance of real algorithms on real multiprocessors, there are no formal tools for analyzing the contention of asynchronous shared-memory algorithms. This paper introduces the first formal complexity model for contention in multiprocessors. We focus on the standard multiprocessor architecture in which n asynchronous processes communicate by applying read, write, and read-modify-write operations to a shared memory. 
We use our model to derive two kinds of results: (1) lower bounds on contention for well known basic problems such as agreement and mutual exclusion, and (2) trade-offs between latency (maximal number of accesses to shared variables performed by a single process in executing the algorithm) and contention for these algorithms. Furthermore, we give the first formal performance analysis of counting networks, a class of concurrent data structures implementing shared counters. Experiments indicate that certain counting networks outperform conventional single-variable counters at high levels of contention. Our analysis provides the first formal explanation for this phenomenon. To retrieve the abstract for any report or note, send a message saying (for example) "send abstract 90/3" or "send abstract 90/3 90/2", using the number of the desired technical report to: techreports@crl.dec.com (Internet) crl::techreports (DECnet) To retrieve the PostScript for any report or note, send a message saying (for example) "send postscript 90/3" or "send postscript 90/3 90/2", using the number of the desired technical report to: techreports@crl.dec.com (Internet) crl::techreports (DECnet) Abstracts and PostScript versions of the CRL technical reports are also available via anonymous FTP to crl.dec.com, in the directory /pub/DEC/CRL/{abstracts,tech-reports}. To be added to the mailing list for announcements of CRL technical reports, send mail to techreports-interest-request@crl.dec.com (Internet) crl::techreports-interest-request (DECnet) Michelle Gillespie Cambridge Research Lab michelle@crl.dec.com Digital Equipment Corp. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dchan@spock.usc.edu (Daniel Chan) Subject: Stanford DASH project Organization: University of Southern California, Los Angeles, CA I am interested in learning more about the DASH project, and am looking for ftp sites that I can download related papers and laboratory reports from. Thank you for your time. Dan Chan Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stubbi@gmd.de (Stephan Springstubbe, Z1.HR 2337) Subject: Re: HPF/F90 Benchmarks Reply-To: stubbi@gmd.de Organization: GMD, Sankt Augustin, Germany. References: <1993Oct1.124747.4422@hubcap.clemson.edu> Hi! I don't know where other people get (official) informations [hi, Rusty :-)] about HPF/Fortran-D Benchmarking Suite, but you can try it at: minerva.npac.syr.edu under: /benchmark/suite Greetings, Stephan Stephan Springstubbe German National Research Center For Computer Science Department of Supercomputing Schloss Birlinghoven Germany Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: chipr@joseph.WPI.EDU (Norman E Rhodes) Subject: Benchmarks Date: 4 Oct 1993 11:40:56 GMT Organization: Worcester Polytechnic Institute Could someone please mail me the names and/or locations of some source code for parallel computing benchmarks. Any response would be appreciated. mail to : chipr@wpi.wpi.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: fismith@ac.dal.ca Subject: Parallel Debuggers, Performance Monitors Organization: Dalhousie University, Halifax, Nova Scotia, Canada Hi: Would someone please give me some pointers into the current state of the art for parallel debuggers and performance monitors? 
Thanks in advance, -- Frank Smith email: frank@arraysystems.nstn.ns.ca Array Systems Computing Inc. Phone: (902) 468-8991 1000 Windmill Road, Suite 10 FAX: (902) 468-8980 Dartmouth, N.S. Canada B3B 1L7 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ghsu%unstable@relay.nswc.navy.mil (Guan-Hsong Hsu) Subject: [Q] LINDA or similar type of compilers Organization: Naval Surface Warfare Center Dear Netters, This is really not quite "massively parallel" computation, but a first step toward it, we hope. We need some advice and suggestions about the costs and benefits of compilers such as LINDA. Please direct me to a more appropriate news group, or suggest information, readings, etc. Any comment/suggestion is welcome. We will have a network of 6 HP 700 series workstations (some 735s and some 720s) connected through Ethernet. In order to (1) better utilize the system resources, and (2) simulate a parallel computing environment that we hope to move into, we are considering getting one of those compilers, such as LINDA, to better utilize the resources. Now here are my questions: (1) What is the appropriate way to describe compilers like LINDA that run on a network and distribute resources, or simulate a parallel computer on a network? For now, I'll call them network compilers. (2) A large body of our code is written in FORTRAN. Does that make our use of network compilers difficult? Can it be done through mixed language programming, i.e., write the main driver in C and call FORTRAN modules to do the job, or do we have to rewrite the code in C to use the compilers? Can we avoid mixed language? (3) What configuration of computers works best for a network compiler: mostly systems with roughly equal speed and memory size, or one powerful, fast machine as a server? Or does it not matter? (4) I have seen people asking questions about simulator programs in this news group; is there a network compiler that simulates (in a scaled-down way) some truly massively parallel computers, say the Intel Paragon? Thank you all in advance. Guan-Hsong Hsu Email: ghsu@unstable.nswc.navy.mil Nonlinear Dynamics Group (301)394-5289, 394-5290 Mathematics and Computations Branch (B40) NSWC, Silver Spring, MD 20903-5640 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: snoo@uni-paderborn.de (Holger Naundorf) Subject: Q:virtual shared memory Organization: Uni-GH Paderborn, Germany Nntp-Posting-Host: noether.uni-paderborn.de Keywords: shared memory Does anyone know of public domain software that simulates a shared memory machine on a network of Sun workstations? Thanks in advance, Holger Naundorf snoo@uni-paderborn.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rajesh@npac.syr.edu (Rajesh R. Bordawekar) Subject: Looking for Parallel Code for out-of-core solvers Hi, I am looking for parallel code performing out-of-core computations (for example LU, CG, SVD, etc.). Preferably the code should be written for either Intel (i860/delta/Paragon) or NCube.
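A note on question (2) in the LINDA post above, driving existing FORTRAN modules from a C or C++ main program: the usual route is to declare the FORTRAN routine with C linkage and pass every argument by reference. The sketch below is only an illustration under common assumptions: it assumes an f77-style compiler that appends a trailing underscore to external names (widespread on Unix systems of this kind, but not universal), and the routine name SAXPY is hypothetical, not part of any particular package.

// Hypothetical C++ driver calling an existing FORTRAN 77 routine, assumed
// to be declared on the Fortran side as:  SUBROUTINE SAXPY(N, A, X, Y)
// with N an INTEGER, A a REAL scalar and X, Y REAL arrays of length N.
// Assumes the Fortran compiler appends a trailing underscore to external
// names; Fortran arguments are passed by reference, arrays column-major.
#include <cstdio>
#include <vector>

extern "C" void saxpy_(const int* n, const float* a,
                       const float* x, float* y);   // Fortran entry point

int main() {
    const int n = 4;
    const float a = 2.0f;
    std::vector<float> x(n, 1.0f), y(n, 3.0f);
    saxpy_(&n, &a, x.data(), y.data());             // y <- a*x + y
    for (int i = 0; i < n; ++i) std::printf("%g ", y[i]);
    std::printf("\n");
    return 0;
}

Typically one compiles the Fortran file with f77 -c and links the resulting object together with the C++ driver; the places where mixed-language work tends to get awkward are character-string arguments and assumptions about array layout, not simple numeric routines like this one.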
Please send your replies to rajesh@tonto.npac.syr.edu Thanks rajesh --------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tony@aurora.cs.msstate.edu (Tony Skjellum) Subject: Scalable Parallel Libraries Conference, Final Program Organization: Mississippi State University Nntp-Posting-Host: aurora.cs.msstate.edu Summary: Here is the final program Keywords: Scalable Parallel Libraries *** FINAL PROGRAM *** Scalable Parallel Libraries Conference (including Multicomputer Toolbox Developers' & Users' Meeting) October 6-8, 1993 National Science Foundation Engineering Research Center for Computational Field Simulation, Mississippi State, Mississippi The Scalable Libraries Conference is designed to . complement other, larger, general conferences in high performance computing, . bring together key researchers from national laboratories, industry, and academia to effect technology transfer of high performance computing software and effect an interchange of ideas, . include vendors as attendees, but not include promotional activities, . provide invited talks on topics involving application of scalable libraries in applications, developments in scalable libraries, and related issues (such as message-passing technology), . provide for contributed posters to allow further interchange of ideas, . introduce potential users to the work done at our center on the scalable libraries, a specific example of technology we hope to transfer. , provide user of scalable libraries with an opportunity to impact research in the area (ie, complain/suggest needs/requirements), . There will be no parallel sessions (except that posters will be displayed for at least one full day). . A proceedings of the invited papers and contributed posters will be made; the paper deadline will be at the conference. **** Wednesday, October 6 **** 8:00-9:00am Registration (Registration will continue all day) 9:00am-9:10am Welcome, Prof. Don Dearholt, Head, CS Department 9:10am-9:15am Introduction of Keynote speaker Charles L. Seitz, [intro by Anthony Skjellum] **** KEYNOTE ADDRESS **** 9:15am-10:15am Charles L. Seitz, Caltech, Title: "High-Performance Workstations + High-Speed LANs >= Multicomputers" 10:15am - 10:30am, BREAK 10:30am-11:15am Louis Turcotte, WES (Vicksburg) Title: "The National High Performance Distributed Computing Consortium -- A Proposal" 11:15am-12:00n David Womble, Sandia ALBQ Title: "Out of core, out of mind: making parallel I/O practical". 12:00n - 1:00pm Lunch at ERC [Lunch each day included with Registration Fee] 1:00pm - 1:45pm Robert Falgout, LLNL Title: "Modeling Groundwater Flow on Massively Parallel Computers" 1:45pm - 2:30pm Milo R. Dorr, LLNL Title: "A Concurrent, Multigroup, Discrete Ordinates Model of Neutron Transport" 2:30pm - 2:45pm, BREAK 2:45pm-3:30pm Nikos Chrisochoides, NPAC Syracuse Title: "An alternative to data-mapping for scalable iterative PDE solvers : Parallel Grid Generation" 3:30pm - 4:15pm Alan Sussman, U. Maryland Title: "Design of a Runtime Library for Managing Dynamic Distributed Data Structures", 4:15 - 5:00pm S. Lennart Johnsson, Harvard University and Thinking Machines Corp. Title: "Scientific Libraries on Scalable Architectures" 7:00pm - 9:30pm (or so) *** INFORMAL DINNER AT OBY's RESTAURANT [ATTENDEES PAY] *** Thursday, October 7 8:30am - 8:45am, Welcome, Joe F. 
Thompson, Director of NSF ERC 8:45am - 9:30am Sanjay Ranka, NPAC Syracuse Title: "Scalable Libraries for High Performance Fortran" 9:30am - 10:15am Dan Quinlan, LANL Title: "Run-time Recognition of Task Parallelism Within the P++ Parallel Array Class Library" 10:15am - 10:30am, BREAK 10:30am - 11:15am William Gropp, Argonne National Laboratory Title: "Scalable, Extensible, and Portable Numerical Libraries" 11:15am-12:00n Anthony Skjellum, MSU/NSF ERC Title: "The Multicomputer Toolbox: Current and Future Directions" 12:00n - 1:00pm Lunch at ERC 1:00 - 1:45pm David Walker, ORNL Title: "The design of scalable linear algebra libraries for concurrent computers" 1:45 - 2:15pm Dan Reed, UIUC Title: "Performance Evaluation and Presentation Techniques for Massively Parallel Systems" 2:30pm - 2:45pm BREAK 2:45pm - 3:30pm Padma Raghavan, UIUC/NCSA Title: "Parallel Solution of Linear Systems using Cholesky Factorization" 3:30pm - 4:15pm Anna Tsao, SRC Title: "The PRISM Project: Infrastructure and Algorithms for Parallel Eigensolvers" 4:15pm - 5:00pm Chuck Baldwin, UIUC Title: "Dense and Iterative Concurrent Linear Algebra in the Multicomputer Toolbox" 5:00pm - 5:45pm Steve Lederman, SRC Title: "Comparison of Scalable Parallel Matrix Multiply Libraries" 7:00pm - 9:30pm (or so) *** CONFERENCE BANQUET AT STATE HOUSE HOTEL [MEAL INCLUDED WITH REGISTRATION] *** [There will be no speech at the banquet, so please attend. Just a short "hello" by one of the MSU principals.] **** Friday, October 8 **** 8:45am - 9:30am Steven Smith, LLNL Title: "High-Level Message-Passing Constructs for Zipcode 1.0: Design and Implementation" 9:30am - 10:15am Charles H. Still, LLNL Title: "The Multicomputer Toolbox: Experiences with the Meiko CS-2". 10:15am - 10:30am, Break 10:30am-11:15am Ewing Lusk, Argonne National Laboratory Title: "The MPI Communication Library: Its Design and a Portable Implementation" 11:15am-12:00n Anthony Skjellum, MSU/NSF ERC Title: "Building Parallel Libraries using MPI" 12:00n - 1:00pm Lunch at ERC 1:00p- 1:45pm Linda Petzold, AHPCRC & UMN Title: "Solving Large-Scale Differential-Algebraic Systems via DASPK on the CM5" 1:45pm - 2:30pm Dan Meiron, Caltech Title: "Using Archetypes to Develop Scientific Parallel Applications" 2:30pm - 2:35pm Conference Concluding Remarks, Anthony Skjellum **** CONFERENCE ENDS AT 3:00pm **** Contributed Posters (on display all day on Thursday, October 7): Poster Setup... afternoon on October 6, starting 8am on October 7. 1. Edward Luke, MSU/ERC, Mississippi State University, Title: "The Definition and Measurement of Scalable Parallel Algorithms" 2. Leah H. Jamieson, Ashfaq Khokhar, Jamshed Patel, and Chao-Chun Wang School of Electrical Engineering, Purdue University Title: "A Library-Based Program Development Environment for Parallel Image Processing" 3. David Koester, Sanjay Ranka, and Geoffrey Fox, NPAC Syracuse University, Title: "Parallel Block-Diagonal-Bordered Sparse Matrix Algorithms for Electrical Power System Applications" 4. Alvin P. Leung, NPAC, Syracuse University; Anthony Skjellum, MSU/ERC, Title: "Concurrent DASSL: A Second-Generation, DAE Solver Library" 5. Antoine P. Petitet, University of Tennessee, Computer Science Department Title: "Implementation and use of scalable parallel libraries to solve the nonsymmetric eigenproblem" 6. Roldan Pozo, University of Tennessee, Computer Science Department Title: "ScaLAPACK++: An Object Oriented Linear Algebra Library for Scalable Systems" 7. 
Rahul Bhargava, Geoffrey Fox, Chao-Wei Ou, Sanjay Ranka and Virinder Singh, NPAC Syracuse University, Title: "Scalable Libraries for Graph Partitioning" 8. Subhash Saini and Horst D. Simon, Numerical Aerodynamic Simulation Facility, NASA Ames Research Center Title: "Performance of BLAS 1, 2 and 3 on NAS Intel Paragon XP/S-15" 9. Kamala Anupindi, NPAC Syracuse University, Anthony Skjellum, MSU/ERC, Paul Coddington and Geoffrey Fox, NPAC Syracuse University, Title: "Parallel Differential Algebraic Equations (DAEs) Solvers for Electrical Power System Transient Stability Analysis" 10. Steven F. Ashby, Robert D. Falgout, Steven G. Smith, Andrew F. B. Tompson, Lawrence Livermore National Laboratory, Title: "High Performance Computing Strategies for Detailed Simulation of Subsurface Flow and Chemical Migration" 11. Jaeyoung Choi, Oak Ridge National Laboratory, Jack J. Dongarra, University of Tennessee and Oak Ridge National Laboratory, David W. Walker, Oak Ridge National Laboratory Title: "Parallel Matrix Transpose Algorithms on Distributed Memory Concurrent Computers" 12. George Adams III and Allan D. Knies, Purdue University, Title: "The Integrated Library Approach to Parallel Computing" -- . . . . . . . . . "There is no lifeguard at the gene pool." - C. H. Baldwin - - - Anthony Skjellum, MSU/ERC, (601)325-8435; FAX: 325-8997; tony@cs.msstate.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wang@astro.ocis.temple.edu (Jonathan Wang ( the-wang )) Subject: Parallel Algorithm for Linear Equations? I am looking for a parallel algorithm to solve a 5000x5000 system of linear equations. I did poorly in college math, but I seem to remember there were only sequential solutions -- at least in my text book. Can any parallel people out there give me a hand? Thanks in advance. --wang P.S. I realize this is a moderated group, ... but I don't know whom I should contact ... anyway, my apologies for any trouble I made. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: elias@TC.Cornell.EDU (Doug Elias) Newsgroups: comp.parallel,comp.parallel.pvm,comp.sys.super,comp.arch Subject: Parallel Programming Environments Survey -- Update Date: 4 Oct 93 13:21:09 Organization: Software and Consulting Support Group, Cornell Theory Center, C.U. Nntp-Posting-Host: wonton.tc.cornell.edu Greetings... Once again, many thanks to the growing number of you who have taken the necessary time out of your busy schedules to complete and return the PPE survey i posted to these newsgroups a couple of weeks ago, and which is still available via anon-ftp on theory.tc.cornell.edu as /pub/PPE_Survey (for those of you without ftp access, just drop me a note and i'll email you a copy). i've been repeatedly asked to consider relaxing two of the restrictions for the inclusion of a given PPE in the survey, those being "in use at >= 5 sites" and "runs on more than 1 type of machine". i'm sure the reasons for these requirements are perfectly clear (no "research projects", and "widespread applicability"), but i have no intention of arbitrarily limiting information-flow, so consider those restrictions relaxed.
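A note on the "Parallel Algorithm for Linear Equations?" question above: dense systems of that size are normally solved in parallel with block-distributed LU factorization, which is exactly what libraries such as ScaLAPACK (see poster 6 in the list above) provide, so there is no need to invent an algorithm from scratch. The sketch below only illustrates the basic source of parallelism in the iterative alternative: within one Jacobi sweep every row update is independent, so rows can be divided among processors. It is a minimal sequential sketch with hypothetical names, and Jacobi converges only for suitable matrices (e.g., strictly diagonally dominant ones).

// Minimal illustration of why iterative solvers parallelise naturally:
// one Jacobi sweep for A*x = b.  Every iteration of the outer i-loop is
// independent of the others, so rows can be split across processors.
// This is a sketch, not a production solver.
#include <cstddef>
#include <vector>

void jacobi_sweep(const std::vector<std::vector<double>>& A,
                  const std::vector<double>& b,
                  const std::vector<double>& x_old,
                  std::vector<double>& x_new) {
    const std::size_t n = b.size();
    for (std::size_t i = 0; i < n; ++i) {        // each i can go to a different processor
        double sum = 0.0;
        for (std::size_t j = 0; j < n; ++j)
            if (j != i) sum += A[i][j] * x_old[j];
        x_new[i] = (b[i] - sum) / A[i][i];       // uses only x_old, never x_new
    }
}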
A few comments regarding the rather intimidating length of the survey: 1) of all respondents-to-date, only *1* required more than 40 minutes, and that one took about an hour; 2) if you don't think some of the characteristics are relevant, skip 'em -- i'll take whatever i can get; 3) at the VERY LEAST: look over the list of major categories at the end of the actual survey, and indicate what you think their importance-ordering is; 4) i ain't kidding: people from all-over-the-world are asking for copies of the results -- your opinions are of very great interest to a large group of people! Please -- keep the survey in a window on your workstation and put a few minutes into it every once in a while, take a copy home and fill it in while you're watching TV (i'll take hardcopy, hell, i'll take ANYTHING), any way that works for you: the only thing that's important is that others are given the benefit of your experiences and opinions. Thanks, and i'm definitely looking forward to tipping a few (on me...what a pun) with all of you who've done me the favor of responding. doug -- # ____ |Internet: elias@tc.cornell.edu #dr _|_)oug|USmail: Sci.Comp.Support/Cornell Theory Center # (_| | 737 TheoryCtrBldg/C.U./Ithaca/N.Y./14853-3801 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dduke@ibm1.scri.fsu.edu (Dennis Duke) Subject: Cluster Workshop '93 - Second Announcement -- CLUSTER WORKSHOP '93 Second Announcement NOVEMBER 1, 1993 DEADLINE FOR ABSTRACTS Supercomputer Computations Research Institute Florida State University Tallahassee, Florida 32306 Tuesday, December 7 - Thursday, December 9, 1993 NEW THIS YEAR Special Tutorial/Vendor Day Monday, December 6, 1993 Organizers: Sudy Bharadwaj (sudy@sca.com), SCA Peter Dragovitsch (drago@scri.fsu.edu), SCRI, FSU Dennis Duke (dduke@scri.fsu.edu), SCRI, FSU Adam Kolawa (ukola@flea.parasoft.com), Parasoft Tim Mattson (tgm@SSD.intel.com), Intel Neil Lincoln (nrl@nips.ssesco.com), SSESCO Vaidy Sunderam (vss@mathcs.emory.edu), Emory University Cluster Workshop '93 continues the series of cluster computing workshops held at SCRI in 1991 and 1992. The nature of the workshop (submission, sessions, proceedings) is deliberately designed to enable maximum dissemination and exchange of information in the most timely manner possible. The goal of the workshop is to bring together people who are interested in the issues of using heterogeneous clusters of computers as computational resources. 
This group of people would include: - computational scientists or other end-users who desire to exploit the power and capabilities of heterogeneous clusters as an alternative to or in conjunction with conventional supercomputers or MPP's - software developers of queuing systems, parallel processing packages, and other software tools - system administrators interested in both the strategic and technical issues of running clusters, including in general any kind of heterogeneous collection of networked computers - vendors who are developing products to serve this market: (a) new generations of high performance workstations (b) new forms of packaging of workstations (c) new high speed networking products (d) new products for addressing mass storage and other I/O needs (e) any other applicable technology - persons who would like to share their experiences, and especially give critical advice on desired user requirements, shortcomings of present hardware and software configurations, successes and failures to date, overviews of planned projects, etc. Although most practical implementations of clusters to date consist of collections of RISC workstations, we are interested also in more general configurations which might include any number of distinct architectures, and a variety of new high-speed network interconnections. The format of the workshop will be to fill the day with contributed and invited talks, and to have informal evening gatherings designed to further enhance the opportunity for information exchange. We especially encourage contributions of a 'practical experience' nature, since this is likely to be of the most use to the many people who are involved in cluster computing, and will also be complimentary to the many conferences that stress more the academic side of computing and computer/computational science research. The tentative outline schedule for the workshop for Tuesday through Thursday is included below. We will adjust it depending upon the number of contributed papers. NEW FOR THIS YEAR will be a special tutorial/vendor day, Monday, December 6. The tentative program for that day is attached below. There will be no extra charge for registered workshop attendees for the tutorial/vendor day. Please register for the workshop using the form attached below. We encourage as many attendees as possible to plan to make a presentation at the workshop. We do ask that speakers plan to address topics within the scope outlined above. Please send us a short abstract describing your talk, so we can plan an appropriate place in the schedule. An announcement of the schedule of speakers will be distributed as soon as possible. Within the limits of available time, we would like to accommodate as many speakers as practicable. PLEASE SUBMIT ABSTRACTS BY MONDAY, NOVEMBER 1, 1993. The 'proceedings' of the workshop will be published via anonymous ftp. We will request each speaker to send us an appropriate electronic version of his talk (ascii, postscript, tex, latex, troff, etc.). These will then be placed on the machine ftp.scri.fsu.edu for further distribution. The proceedings of the 1991 and 1992 meetings are already on the machine. Any questions or requests can be sent via email to cluster-workshop@scri.fsu.edu or to one of the organizers. SCRI can be reached by phone at (904)-644-1010. 
============================================================================== OUTLINE SCHEDULE Monday, December 6 8:00 am - 10:00 pm Tutorial/Vendor Program - Conference Center (see detailed schedule below) 6:00 - 8:00 pm Registration and Reception - Radisson Hotel Tuesday, December 7 7:30 - 8:30 Continental Breakfast and Registration - Conference Center 8:30 - 9:15 Invited Speaker Number 1 9:15 - 10:00 Invited Speaker Number 2 10:00 - 10:30 Break 10:30 - 12:00 Session 1 10:30 - 12:00 Session 2 12:00 - 1:00 Lunch 1:00 - 3:00 Plenary Session 3:00 - 3:30 Break 3:30 - 5:30 Session 3 3:30 - 5:30 Session 4 6:30 - 10:00 Hosted Reception at SCRI 6:30 - 8:00 Demos at SCRI 8:00 - 9:30 Moderated Session Wednesday, December 8 7:30 - 8:30 Continental Breakfast and Registration - Conference Center 8:30 - 9:15 Invited Speaker Number 3 9:15 - 10:00 Invited Speaker Number 4 10:00 - 10:30 Break 10:30 - 12:00 Session 5 10:30 - 12:00 Session 6 12:00 - 1:00 Lunch 1:00 - 3:00 Session 7 1:00 - 3:00 Session 8 3:00 - 3:30 Break 3:30 - 6:00 Session 9 3:30 - 6:00 Session 10 6:30 - 10:00 Hosted Reception at SCRI 6:30 - 8:00 Demos at SCRI 8:00 - 9:30 Moderated Session Thursday, December 9 7:30 - 8:30 Continental Breakfast - Conference Center 8:30 - 10:00 Session 11 8:30 - 10:00 Session 12 10:00 - 10:30 Break 10:30 - 12:00 Session 13 10:30 - 12:00 Session 14 12:00 Workshop Ends ============================================================================== Cluster Workshop '93 Tutorial/Vendor Day FSU Conference Center Monday, December 6, 1993 For information contact Louis Turcotte (turcotte@bulldog.wes.army.mil) Sponsors: Hewlett-Packard (confirmed) others (invited) 7:45 - 8:00 Gather (Refreshments) 8:00 - 8:30 Welcome/Intro/Overview of day o Dennis Duke (SCRI) (Confirmed) o Louis Turcotte (MSU/ERC) (Confirmed) 8:30 - 9:30 Overview of Batch Environments (Tutorial) o Michael Nelson (NASA Langley Research Center) (Confirmed) 9:30 - 10:00 Break (Refreshments) 10:00 - 12:00 Batch product presentations o Condor: Miron Livny (UofWisc/Madison) (Confirmed) o DQS: Dennis Duke (SCRI) (Confirmed) o LoadLeveler: (IBM) (Confirmed) o LSF: Songnian Zhou (Platform Computing) (Confirmed) o NQS: (Sterling Software) (Invited) o TaskBroker: (HP) (Confirmed) 12:00 - 1:00 Lunch (Box) 1:00 - 3:00 Overview of Parallel Environments (Tutorial) o Sudy Bharadwaj (SCA) (Confirmed) o Tim Mattson (Intel) (Confirmed) o Doug Elias (Cornell Theory Center) (Confirmed) 3:00 - 3:30 Break (Refreshments) 3:30 - 5:30 Parallel product presentations o Linda: (SCA) (Confirmed) o PAMS: Wayne Karpoff (Myrias Computer Technologies) (Confirmed) o p4: Ewing Lusk (Confirmed) (Argonne National Laboratory) o Express: Adam Kolawa (Parasoft) (Confirmed) o PVM: Vaidy Sunderam (Emory University) (Confirmed) o xHPF: Bob Enk (Applied Parallel Research) (Confirmed) 7:00 - 10:00 Hardware vendor presentations and refreshments o Convex: Brian Allison (Confirmed) o DEC: (Confirmed) o HP: Mark Pacelle (Confirmed) o IBM: (Confirmed) o SGI: (Invited) o SUN: (Invited) =============================================================================== REGISTRATION AND HOTEL INFORMATION WORKSHOP ON RISC CLUSTER COMPUTING December 7-9, 1993 PLEASE TYPE OR PRINT Name _____________________________________ Social Security Number ___________________ (your SSN is optional, but without it any request for a registration refund will be delayed) Company __________________________________ Address/Mailstop __________________________________________________ City/State/Zip/Country
____________________________________________ Phone (___)______________________ Email address _______________________________________ Workshop Program Number: 1902694 The registration fee for the workshop is $145, and includes three continental breakfasts, Tuesday and Wednesday lunches, morning and afternoon break refreshments, and the food and drink for the evening sessions. There is no extra charge for the Monday tutorial/vendor day, but attendees for that day must be registered for the whole workshop. _____ Check here if you will attend the Monday tutorials. If you want to pay by check, please print out and fill in the above registration form, including especially the Workshop Program Number, and return the registration form and (payable to FSU) fee to: Center for Professional Development and Public Service Conference Registrar Florida State University Tallahassee, Florida 32306-2027 If you want to register by credit card, you may register by email by sending back an edited form of the above registration information, as well as the following credit card information: Credit Card Name (Mastercard or Visa ONLY)_____________________________ Credit Card Number ___________________________ Name (as it appears on card) __________________________________ Expiration Date of Card _____________________ There is an additional 2% charge by the University for credit card registrations (bringing the required total to $148). Email registrations, and all other inquiries about the workshop, may be sent to: email: cluster-workshop@scri.fsu.edu, or fax: (904)-644-0098, attention of Pat Meredith Hotel Information: The workshop hotel is the Radisson, which is within walking distance of the conference center and the FSU campus. Rooms are $60 per night, single or double, and reservations should be made by November 10. Be sure to mention Cluster Workshop '93 to get the special rate. Radisson Hotel 415 North Monroe Street Tallahassee, FL 32301 Phone and Fax: (904) 224-6000 ========================================================================== Dennis Duke phone: (904) 644-0175 Supercomputer Computations Research Institute fax: (904) 644-0098 Florida State University, B-186 email: dduke@scri.fsu.edu Tallahassee, Florida 32306-4053 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jacobj@research.CS.ORST.EDU (Joseph Jacob) Subject: FAT TREES. Message-ID: <28q0omINN2r3@flop.ENGR.ORST.EDU> Article-I.D.: flop.28q0omINN2r3 Posted: Mon Oct 4 13:23:50 1993 Organization: Computer Science Department, Oregon State University NNTP-Posting-Host: ruby.cs.orst.edu Hi, Do fat trees and hypertrees have the same network topology . If there is a subtle difference can someone point out the difference to me ?. If both mean the same thing why the necessity for two names ? . I happened to read the paper by Charles Leiserson ( 'xcuse spelling ) , the man behind fat trees . He makes no reference to hypertrees .... . Thanx . -JJ. jacobj@research.cs.orst.edu . Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: gornish@sp1.csrd.uiuc.edu (Edward H. Gornish) Subject: NAS Parallel Benchmarks Message-ID: Organization: Center for Supercomputing R&D About a month ago, someone posted the following information regarding how to acquire the NAS Parallel Benchmarks: > Questions regarding *serial* versions of the source code go to > bm-codes@nas.nasa.gov. 
> Questions regarding *parallel* versions of the source code go to Eric > Barszcz, barszcz@nas.nasa.gov. Regarding the serial versions, I sent email to the above *serial* address, and I have received no reply. It's been about one month. Regarding the parallel versions, I sent email to the above *parallel* address, and I was sent a form to fill out. I filled out and returned the form. I received no further reply. Again, it's been about one month. So does anyone know of another way of obtaining these codes? How large are they? Can they be sent via email? Is there a site from which I can ftp them? thanks -- Eddie Gornish University of Illinois - Center for Supercomputing Research & Development gornish@csrd.uiuc.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: liu@issi.com (Guizhong Liu) Subject: MPP maker companies Organization: International Software Systems, Inc. ISSI Reply-To: liu@issi.com Dear Netters: I want to contact the companies making MPP supercomputers to get some information. If you know any of them (address, email address, or phone number), please send a reply to: liu@issi.com. Thanks in advance. G. Liu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: bianches@hp1.sm.dsi.unimi.it (pietro bianchessi) Subject: Help: MAX-FLOW & g-bipartite subgraph Organization: Computer Science Dep. - Milan University Summary: Help: MAX-FLOW & g-bipartite subgraph Keywords: MAX-FLOW Hi, I need help with these two questions: -Is there a parallel algorithm to find the maximum generalized bipartite subgraph of a graph G = (V,E)? -Which is the best parallel algorithm to calculate the maximum flow? Pietro Bianchessi Dept of CS, Milan, Italy bianches@ghost.dsi.unimi.it P.S. All answers, in particular those referring either to the EREW PRAM model or to the CM, are welcome. Thank you. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Organization: Valencia University (Universitat de Valencia) SPAIN From: Rogelio Montanana (MONTANAN at EVALUN11) Newsgroups: comp.lang.fortran,comp.parallel,comp.parallel.pvm,comp.sys.super Subject: SHARE Europe AM93 HPC Day Program Attached please find the titles, speakers and abstracts of the talks that will take place during the High Performance Computing Day, part of the SHARE Europe Anniversary meeting, to be held in The Hague, The Netherlands, on Wednesday October 27, 1993. There is a special one-day fee for those only interested in the HPC Day. If you are interested in attending, please contact SHARE Europe HQ (48, route des Acacias, CH 1227, Carouge, Switzerland, Phone +41 22 3003775, Fax +41 22 3001119). ........................................................................ Session 3.1H Time: 8:45-9:45 Title: An overview of HPC-systems Speaker: Ad Emmen (SARA, Amsterdam) Abstract: There are many different types of architectures available when it comes to high-performance computing, ranging from traditional parallel vector computers to message-passing systems with thousands of processors. Furthermore, people are clustering together powerful workstations into one parallel system to get higher performance. The lecture will give an overview of the main architectures employed, try to classify the available (commercial) systems, and comment on the worldwide distribution. CV: Ad Emmen is Manager of User Support at SARA, the Dutch national supercomputer center.
He has been involved in high-performance computing since 1983 and is editor of several journals in the field, including "Supercomputer" and "Supercomputer - European Watch". ........................................................................ Session 3.2H Time: 10:15-11:10 Title: SP1: The first benchmarks Speaker: Joanne Martin (IBM, Kingston) Abstract: The 9076 SP1 is IBM's first offering in the Scalable POWERparallel Series. Based on RISC System/6000 processors, the SP1 comprises 8 to 64 processors and has a peak system performance of 1 to 8 GFLOPS. In this talk, we will discuss the effective performance of the SP1 based on a series of benchmarks, which includes synthetic routines to measure specific system components, computational kernels that demonstrate performance on commonly used routines, well-known benchmarks for which performance data is expected in the supercomputing community (e.g. TPP, SPEC/Perfect), and full applications and production use. CV: Joanne L. Martin, Ph.D., is a Senior Technical Staff Member and Manager of Applications Technology and High-Performance Studies. Joanne joined IBM as a Research Staff Member at the T.J. Watson Research Center in November, 1984. She had received her Ph.D. in mathematics from The Johns Hopkins University in 1981 and conducted research in performance evaluation for supercomputers at the Los Alamos National Laboratory prior to joining IBM. In May, 1991, Joanne was appointed manager of her present department, and is responsible for performance analysis and modeling for IBM's Scalable Parallel systems, within the newly created POWER Parallel business unit. In January, 1993, Joanne was appointed a member of IBM's Senior Technical Staff. Joanne has maintained active participation in the external scientific community, as founding editor-in-chief of the International Journal of Supercomputer Applications, serving as the General Chair of Supercomputing 90, being sought as an advisor to the Dept. of Energy and the National Science Foundation, and being named to Who's Who in Science and Engineering for 1992-1993. ........................................................................ Session 3.3H Time: 11:15-12:10 Title: Discussion of customer requirements and scalable parallel future. Speaker: Joanne Martin (IBM, Kingston) Abstract: During this session Dr. Joanne Martin will address the requirements provided by Share Europe during Share SM 93. She will also discuss the possible evolution of the Scalable Parallel systems. ........................................................................ Session 3.4H Time: 14:00-14:45 Title: Overview of PVM Speaker: John Zollweg (Cornell Theory Center) Abstract: Parallel Virtual Machine (PVM) has become a very popular message-passing environment on many workstation systems with TCP/IP communication. The standard PVM package will be briefly described. Some enhancements to this system have been made by IBM that allow one to run PVM over a point-to-point switch with low message latency. Results will be presented for some scientific applications that require frequent communication, showing the effects of the enhanced PVM. CV: John Zollweg is Project Manager for the Strategic Applications Program in the Cornell Theory Center. He has had extensive experience with parallel processing, including PVM, both as a researcher who used the Theory Center's platforms for many years (in chemical engineering), and now as a Theory Center staff member coordinating early use by researchers of the strategic platforms.
John is now involved in a Joint Verification Study with IBM. He has already ported his own research code to the SP1, and is co-chairing the SP1 Early User Workshop in June. ........................................................................ Session 3.5H Time: 14:50-15:35 Title: High Performance Fortran Speaker: Clemens-August Thole (GMD-I1T) Abstract: The architecture of high performance computers is becoming more complex. Large numbers of parallel nodes and non-uniform memory access times are challenges for compilers, which make efficient code generation for sequential Fortran 90 programs very difficult. In order to generate better code, compilers need additional information, such as the specification of opportunities for parallel execution and of the distribution of data objects onto the memories of the nodes. The High Performance Fortran Forum (HPFF) was founded as a coalition of about 40 industrial and academic groups (including most vendors of parallel computers), which has held several meetings since March 1992 in order to define language extensions to Fortran 90 (High Performance Fortran). A final document was accepted in March 1993. HPF especially supports the mapping of data parallel programs onto parallel computers. Compiler directives specify the distribution of data objects. Some language extensions to Fortran 90 make it easier to write Fortran 90 programs in such a way that they can be executed in parallel. The presentation will give an overview of HPF and its features and will report on the status of the initiative. CV: Clemens-August Thole is a member of the HPFF (High Performance Fortran Forum), which developed High Performance Fortran. He organized a group in Europe complementary to the HPFF in the US, and was instrumental in making HPF an international effort. ........................................................................ Session 3.6H Time: 16:00-16:45 Title: MCNP4, a Parallel Monte Carlo Implementation on a Workstation Network Speaker: Frank Schmitz (Kernforschungszentrum Karlsruhe) Abstract: The Monte Carlo code MCNP4 has been implemented on a workstation network to allow parallel computing of Monte Carlo transport processes. This has been achieved by making use of the communication tool PVM (Parallel Virtual Machine) and introducing some changes in the MCNP4 code. The PVM daemons and user libraries have been installed on different workstations to allow working on the same platform. Essential features of PVM and the structure of the parallelized MCNP4 version are discussed in this paper. Experiences are described and problems are explained and solved with the extended version of MCNP. The efficiency of the parallelized MCNP4 is assessed for two realistic sample problems from the field of fusion neutronics. Compared with the fastest workstation in the network, a speed-up factor near five has been obtained by using a network of ten workstations, different in architecture and performance. CV: Frank Schmitz has been working for seven years in the supercomputing environment and is now looking for tools to use the workstations in the Nuclear Research Center. Vectorizing has been done for many problems; parallelizing will be the next job. ........................................................................ Session 3.7H Time: 16:50-17:35 Title: SP1 compared to a workstation cluster: a user experience Speaker: John Zollweg (Cornell Theory Center) Abstract: Not available CV: John Zollweg is Project Manager for the Strategic Applications Program in the Cornell Theory Center.
He has had extensive experience with parallel processing, including PVM, both as a researcher who used the Theory Center's platforms for many years (in chemical engineering), and now as a Theory Center staff member coordinating early use by researchers of the strategic platforms. John is now involved in a Joint Verification Study with IBM. He has already ported his own research code to the SP1, and is co-chairing the SP1 Early User Workshop in June. ........................................................................ Other talks related to HPC during SHARE Europe AM93: Session 1.3C Date: October 25 Time: 11:15-12:00 Title: Highly Parallel DB Computing Speaker: Session 2.1D Date: October 26 Time: 8:45-9:45 Title: Is a parallel System/390 in Your Future? Speaker: Scott Loveland (IBM Poughkeepsie, USA) Session 2.2D Date: October 26 Time: 10:15-11:10 Title: Is a parallel System/390 in Your Future? (Session Continued) Speaker: Scott Loveland (IBM Poughkeepsie, USA) Session 2.3D Date: October 26 Time: 11:15-12:10 Title: Parallel Batch Speaker: Tom Monza (IBM Poughkeepsie, USA) Session 2.EI Date: October 26 Time: 11:15-12:00 Title: Inside the Human Brain with Parallel Processing Speaker: Dr. James A. Browm (IBM Santa Teresa, USA) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Tue, 5 Oct 93 11:37:27 EDT From: "Duncan A. Buell" Subject: CALL FOR PAPERS: IEEE Workshop on FPGAs for Custom Computing Machines CALL FOR PAPERS: IEEE Workshop on FPGAs for Custom Computing Machines April 10-13, 1994 Sheraton Inn at Napa Valley, Napa, California PURPOSE: To bring together researchers to present recent work in the use of Field Programmable Gate Arrays or other means for obtaining reconfigurable computing elements. This workshop will focus primarily on the current opportunities and problems in this new and evolving technology for computing. A proceedings will be published by the IEEE Computer Society. PAPERS (10 pages maximum) SHOULD BE SUBMITTED BEFORE JANUARY 7, 1994 TO DUNCAN BUELL. This workshop is sponsored by the IEEE Computer Society and the TC on Computer Architecture. CoChairs: Ken Pocek (west coast) Duncan Buell (east coast) Intel Supercomputing Research Center Mailstop RN6-18 17100 Science Drive 2200 Mission College Blvd. Bowie, Maryland 20715 Santa Clara, CA 95052 301-805-7372 408-765-6705 301-805-7602 (fax) 408-765-5165 (fax) duncan@super.org kpocek@sc.intel.com ORGANIZING COMMITTEE: Jeffrey Arnold, SRC Wayne Luk, Oxford Peter Athanas, VPI Jonathan Rose, U Toronto Pak Chan, UC Santa Cruz Herve Touati, DEC Paris Research Tom Kean, Algotronix SOLICITATION:: Papers are solicited on all aspects of the use or applications of FPGAs or other means for obtaining reconfigurable computing elements in attached or special purpose processors or coprocessors, especially including but not limited to *** coprocessor boards for augmenting the instruction set of general purpose computers; *** attached processors for specific purposes (e.g. signal processing) or general purposes; *** languages, compilation techniques, tools, and environments for programming FPGA-based computers; *** application domains suitable for FPGA-based computers; *** architecture emulation using FPGA-based computers. A special session will be organized in which vendors of hardware and software can present new or upcoming products involving FPGAs for computing. 
Individuals wishing to be put on an email list for further information about this workshop should send email with name, regular mail address, and email address to Ken Pocek. Questions regarding submission of papers should be directed to Duncan Buell. The proceedings of FCCM 93 are available from the IEEE Computer Society, Computer Society Press Order Number 3890-02, ISBN 0-8186-3890-7. A limited number of copies will also be available for sale at FCCM 94. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ikal@cperi.forth.gr (Hlias Kalaitzis) Subject: need extra software for 3LC on T-Node machines Organization: CPERI, Thessaloniki, Hellas Date: Tue, 5 Oct 1993 18:19:55 GMT For people who work on Telmat T-node machines: To make use of the 3LC compiler on a T-Node someone must have the special software for the hardwiring of the machine (T_Config etc.). I would be thankful if someone could show me a way to find this stuff. (It's the same for all the 3L languages actually.) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: crispin@csd.uwo.ca (Crispin Cowan) Subject: Terminology question Organization: Department of Computer Science, University of Western Ontario, London Date: Tue, 5 Oct 1993 19:24:10 GMT Message-ID: <1993Oct5.192410.1073@julian.uwo.ca> Sender: news@julian.uwo.ca (USENET News System) I have a terminology question for the masses. Does the term "distributed memory" imply memory that is not shared, i.e. can "distributed memory" and "shared memory" be used as opposing concepts? I had thought I saw the literature moving towards this consensus, but I've recently seen a workshop section labelled "distributed memory" that seeks to include both shared and non-shared memory. Thanks, Crispin ----- Crispin Cowan, CS grad student, University of Western Ontario Phyz-mail: Middlesex College, MC28-C, London, Ontario, N6A 5B7 E-mail: crispin@csd.uwo.ca Voice: 519-661-3342 "If you see a skier, turn. Trees too." Burton rendition of the Skier's^H^H^H^H^H^H^H Snowboarder's Responsibility Code Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.object,comp.parallel From: Jean.Marc.Andreoli@xerox.fr (Jean Marc Andreoli) Subject: CORBA experience Organization: Rank Xerox Research Centre Hi, I am looking for an experience report of a software development using CORBA, or for the code of a complete (toy) application based on CORBA. Any pointer or information would be welcome. Cheers, Jean-Marc Jean-Marc Andreoli | Tel: +33 76 61 50 80 (direct) Rank Xerox Research Center | +33 76 61 50 50 (switchboard) 38240 Meylan (France) | E-mail: Jean.Marc.Andreoli@xerox.fr Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: shamik@cs.umd.edu (Shamik Sharma) Subject: Re: Irregular Data Access Pattern Loops on Parallel Machines Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742 >Hi, >I'm looking for references to articles dealing with the problem of >compiling loops with irregular data access patterns on parallel >computers for minimum communication/maximum locality. >Mounir Hahad e-Mail : hahad@irisa.fr Our group at Maryland has been developing runtime toolkits and compiler techniques to handle irregular data access patterns.
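To make the irregular-access discussion above concrete, the sketch below shows the kind of loop in question and a minimal "inspector": because the index arrays are not known until run time, the communication pattern cannot be placed statically, so a run-time pass scans the index arrays once, records which referenced elements live on another processor, and the resulting schedule is reused by every subsequent "executor" pass over the loop. The block-ownership rule, structure and names here are illustrative assumptions, not the interface of any particular library.

// Sketch of an "irregular" loop and a minimal inspector for it.
// The access pattern y[row[k]] += x[col[k]] is only known at run time,
// so an inspector pass examines the index arrays and records which
// referenced elements are off-processor; the schedule it builds is then
// reused by every executor pass.
#include <cstddef>
#include <vector>

struct Schedule {
    std::vector<std::size_t> off_proc;   // global indices of x we must fetch
};

// Block distribution: this process owns global indices [lo, hi).
Schedule inspector(const std::vector<std::size_t>& col,
                   std::size_t lo, std::size_t hi) {
    Schedule s;
    for (std::size_t k = 0; k < col.size(); ++k)
        if (col[k] < lo || col[k] >= hi)      // referenced element lives elsewhere
            s.off_proc.push_back(col[k]);
    return s;
}

// Executor: the irregular loop itself, run once the schedule's data is
// assumed to have been gathered into the local copy of x.
void executor(const std::vector<std::size_t>& row,
              const std::vector<std::size_t>& col,
              const std::vector<double>& x, std::vector<double>& y) {
    for (std::size_t k = 0; k < row.size(); ++k)
        y[row[k]] += x[col[k]];               // indirect accesses on both sides
}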
Related papers can be found at the anonymous ftp site : hyena.cs.umd.edu under the directory pub/papers -shamik Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: johnp%amber.unm.edu@lynx.unm.edu (John Prentice) Subject: Fortran 90 performance benchmarks available Organization: Dept. of Math & Stat, University of New Mexico, Albuquerque Announcing the availability of the Quetzal Fortran 90 Benchmark Suite --------------------------------------------------------------------- The latest version of the Quetzal Computational Associates Fortran 90 compiler benchmark results are now available from anonymous ftp at unmfys.unm.edu in the directory pub/quetzal. The benchmark codes themselves are available as well and so is a review of the VAST-90 utility for converting Fortran 77 codes to Fortran 90. Please note that this ftp site is different from the one used in the past. This version of the benchmark suite differs in many significant ways from the original one that was made available earlier this year. In particular, some codes have been dropped from the suite and several others have been added. The benchmark results have been updated to reflect the performance of newer releases of the compilers on the codes in the new benchmark suite. Please contact me if you have any problems with the anonymous ftp directory or other questions. John -- Dr. John K. Prentice Quetzal Computational Associates 3200 Carlisle N.E., Albuquerque, NM 87110-1664 USA Phone: 505-889-4543 Fax: 505-889-4598 E-mail: quetzal@aip.org Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: gustav@arp.anu.edu.au (Zdzislaw Meglicki) Newsgroups: comp.parallel,comp.parallel.pvm,comp.sys.transputer Subject: Re: Particle In cell codes on parallel machines (QUERY) Organization: Centre for Information Science Research, ANU, Canberra, Australia Sender: gustav@arp.anu.edu.au (Zdzislaw Meglicki) References: <1993Sep27.125715.7618@hubcap.clemson.edu> Nntp-Posting-Host: 150.203.20.14 I have described how to use the Connection Machine in the context of the particle in cell method in a talk I gave to the High Performance Computing Conference at the Australian National University this year. You will find the text of the talk in the "ez" format ("ez" is an editor from the CMU Andrew Toolkit) in our ftp-anonymous area on arp.anu.edu.au in the directory ARP/papers/particles. The talk describes how you would use the "segmented scan" facility in order to parallelise your particle operations even for very non-uniform distributions of particles. It relies quite heavily on what is available on the Connection Machine. Although in principle you could implement a similar data parallel environment on top of the PVM, I don't think that there is anything out there that would get even close to what you have on the CM (sadly). -- Zdzislaw Meglicki, Zdzislaw.Meglicki@cisr.anu.edu.au, Automated Reasoning Program - CISR, and Plasma Theory Group - RSPhysSE, The Australian National University, G.P.O. Box 4, Canberra, A.C.T., 2601, Australia, fax: (Australia)-6-249-0747, tel: (Australia)-6-249-0158 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: punisher@ccwf.cc.utexas.edu (Judge) Subject: Paralleling on Intel Date: 5 Oct 1993 21:46:56 -0500 Organization: The Punishment Institute Hello all, I am doing my senior thesis on paralleling (massively if possibly) intel [read DOS based] processors. 
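For readers unfamiliar with the segmented-scan operation mentioned in the particle-in-cell post above: a segmented scan computes an independent running sum inside each contiguous segment of an array (for example, particles sorted by grid cell, with a flag marking the first particle of each cell), and it parallelises well because the pairwise combining step is associative. The sketch below gives the sequential semantics plus that combining operator; it is an illustrative sketch with names of my choosing, not the Connection Machine library interface.

// Sequential semantics of a segmented (plus-)scan: flags[i] == 1 marks the
// first element of a segment, and the running sum restarts there.  A
// data-parallel machine can compute the same result in O(log n) steps
// because the pairwise combine in seg_combine() is associative.
#include <cstddef>
#include <utility>
#include <vector>

std::vector<double> segmented_scan(const std::vector<double>& v,
                                   const std::vector<int>& flags) {
    std::vector<double> out(v.size());
    double run = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i) {
        run = flags[i] ? v[i] : run + v[i];   // restart at each segment head
        out[i] = run;
    }
    return out;
}

// The associative operator a tree-based parallel implementation applies:
// each element carries (flag, value); combining two neighbours keeps the
// right value if the right element starts a segment, otherwise it adds.
std::pair<int, double> seg_combine(std::pair<int, double> a,
                                   std::pair<int, double> b) {
    return { a.first | b.first, b.first ? b.second : a.second + b.second };
}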
Any help you can give me, or anything you can steer me to, would be a great help!! Thanks! :) Kasey Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sbs@finsun.csc.fi (Sami Saarinen) Subject: Gustafson's speedup Keywords: parallel, speedup Organization: Center for Scientific Computing, Finland (CSC) Some time ago I read about Gustafson et al.'s speedup definition, as opposed to Amdahl's law. It was supposed to be an alternative method to define speedup generally on MPP systems. I completely understand how it is defined, but I don't fully understand its use. My question is: When is it allowed to use Gustafson's speedup? Is it usable only when my problem size is increasing, or can I also use it when I get more processors but my problem size remains constant? Thanks for the answers in advance. Sami Saarinen CSC/Finland Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: buffat@europe.mecaflu.ec-lyon.fr (Buffat Marc) Subject: [Q] PVM3 and P4 comparisons Date: 6 Oct 1993 14:50:51 GMT Organization: LMFA Nntp-Posting-Host: europe.mecaflu.ec-lyon.fr Keywords: PVM3 and P4 Does anybody have a comparison between PVM3 and P4, especially for the message passing protocol? I know that P4 supports the shared memory protocol, but how does the message passing implementation of P4 compare with the message passing of PVM? Any comments will be appreciated. Marc ----------------------------------------------------------------------------- Marc BUFFAT ++++++++++++++++++++++++ Lab. Mecanique des fluides LMFA | CNRS URA 263 | ECL, 36 av. Guy de Collongue | ECL Lyon | Ecully 69131, FRANCE | UCB Lyon I | Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wuchang@adirondacks.eecs.umich.edu (Wu-chang Feng) Subject: Host Interfaces to Communication Subsystems Organization: University of Michigan EECS Dept., Ann Arbor, MI I'm looking for references on the different structures and representations the host must provide for a communication subsystem in both the parallel and distributed domains (i.e. the different ways the host can "feed" its subsystem). Thanks Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm From: jdelgado%chensun2m.DrillThrall@usceast.cs.scarolina.edu (J. Delgado) Subject: Fortran parallelizer wanted Sender: usenet@usceast.cs.scarolina.edu (USENET News System) Reply-To: jdelgado%chensun2m.DrillThrall@usceast.cs.scarolina.edu Organization: University of South Carolina Hi there, This may be a naive question, but I am new to parallel computing. Is there software that will transform a traditional (i.e., sequential) FORTRAN program into a parallel program? I am interested in parallel versions that run under PVM and/or Express. I know that if I want to get the best performance I will have to write the code from scratch. However, I have _lots_ of "traditional" code and it would be painful to rewrite everything from scratch. I just want something that helps me get started. I have heard of dependence analyzers and parallelizers. Are these useful for what I need? Any information/advice will be greatly appreciated. Please reply directly to me; I will summarize if this is of interest. Thanks.
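For reference on the Gustafson speedup question above, the two formulations are usually written as follows (LaTeX notation), with N processors; the crucial difference is which run the serial fraction is measured on and whether the problem size is held fixed:

\[
  S_{\mathrm{Amdahl}}(N) \;=\; \frac{1}{\,s + (1-s)/N\,},
  \qquad
  S_{\mathrm{Gustafson}}(N) \;=\; s' + (1-s')\,N \;=\; N - (N-1)\,s' .
\]

Here s is the serial fraction of the single-processor run with the problem size held fixed (Amdahl), while s' is the serial fraction observed on the N-processor run under the assumption that the problem grows with N so that the parallel part fills the added processors (Gustafson's scaled speedup). On the usual reading, the scaled-speedup figure is meaningful when the problem size scales with the machine, and the fixed-size Amdahl formula is the appropriate measure when it does not; this is offered only as standard background to the question.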
Javier Delgado jdelgado@sun.che.scarolina.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: misc.jobs.offered,comp.parallel,comp.os.mach From: lernst@SSD.intel.com (Linda Ernst) Subject: OS Designers, Intel, Beaverton, Oregon, USA Organization: Supercomputer Systems Division (SSD), Intel The Supercomputing Systems Division of Intel has positions available now in Beaverton, Oregon, for Senior Software Engineers, Operating Systems. We are a leading supplier of massively parallel supercomputers, which run a fully distributed version of OSF1/AD (Mach microkernel, Unix server) on 1000+ nodes, producing 100s of gigaFLOPS and terabytes of data. Not for the faint of heart :-) Job descriptions are attached. Please mail resumes (please ABSOLUTELY no FAXes, no phone calls, no e-mail): Linda Ernst c/o Intel Corporation Mail Stop CO1-01 5200 N.E. Elam-Young Parkway Hillsboro, OR 97124-6497 =============================================================================== Position #1: Operating System Designer, Memory Management Description: Specify, design, prototype and implement an advanced distributed memory management architecture in a Mach/Unix-based operating system for Intel's next generation parallel supercomputer. Includes collaboration with internal and academic applications researchers to provide high performance operating system support for new parallel programming models. Education and Skills: Minimum BSCS, Masters preferred, 6 to 10 years programming experience, 3 to 5 years operating system design and development experience. Solid knowledge of high performance system design, Unix internals, microkernel operating systems, distributed and parallel architectures. Multicomputer and scalable operating system experience a plus, experience in the areas of supercomputing a plus. Design experience with memory management required. =============================================================================== Position #2: Operating System Designer, Message Passing Description: Design, prototype and implement message passing-related features, and new message passing protocols, in a Mach/Unix-based operating system for Intel's next generation parallel supercomputer. Education and Skills: Minimum BSCS, Masters preferred, 5 to 8 years programming experience, 2 to 5 years operating system design and development experience. Solid knowledge of high performance system design, Unix internals, microkernel operating systems, distributed and parallel architectures. Multicomputer operating system experience a plus, experience in the areas of supercomputing a plus. Experience with message passing highly desirable. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 06 Oct 1993 19:11:16 +0000 (GMT) From: due380@herald.usask.ca (Donald Uwemedimo Ekong) Subject: Req. for code for simul. of stuck-at fault using C++ Organization: University of Saskatchewan Hello, I am looking for information on how to simulate stuck-at faults using C++. For example, if I have "int t;" and I want the value of t to be stuck at, e.g., 4 even if I write the expression "t = 8 + 2;", how can I make t always be 4? Thanks Donald Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.arch,comp.benchmarks From: flynn@poly.edu (Robert Flynn) Subject: Locate ftp site info on machines?
Reply-To: flynn@ranch.poly.edu b.shriver@computer.org Keywords: reports, parallel, distributed Organization: Polytechnic University, New York Date: Wed, 6 Oct 1993 15:34:45 GMT We would like to access, via ftp, technical reports, specifications, etc that manufacturers of parallel and distributed systems make available regarding their hardware and software systems. The vendors that we are interested in include (but is not limited to): nCube TMC Convex MasPar and DEC/MasPar Cray NEC Hitachi ATT/NCR/Teradata KSR etc. and the clustered workstation approaches. e.g. IBM Sun HP DEC etc. A list of ftp sites, directories, file names etc. for each manufacturer would be appreciated. A brief abstract would also be appreciated. We'll compose a table to distribute to the newsgroup. We're interested in architectural specifications, processor descriptions, systems descriptions, database management systems, transaction processing systems, open systems and interoperability issues, migration, client server systems and the like on these high performance systems. Thanks Bob Flynn flynn@ranch.poly.edu Bruce Shriver b.shriver@computer.org Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mbergerm+@CS.CMU.EDU (Marcel Bergerman) Subject: References on parallel processors Organization: Carnegie Mellon University Dear netters, I would like to know of references on the better allocation of processors for large, numerical-intensive parallel applications. The same sequential algorithm can have several parallel implementations and it would be good to have some criteria to decide which one is the best before even beginning to program. Send all answers to me and I'll summarize them to the group. Thank you, Marcel Bergerman mbergerm@cs.cmu.edu Carnegie Mellon University p.s.: this is actually for a friend who does not have access to this bboard. I hope this is not a FAQ or something very dull. Thanks! Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: parallel@netcom.com (B. Mitchell Loebel) Subject: The PARALLEL Processing Connection - October 1993 meeting notice Organization: NETCOM On-line Communication Services (408 241-9760 guest) [Information concerning the PARALLEL Processing Connection is available on parlib ] Multiprocessing With SunOS On October 11th, Steve Klieman of Sun Microsystems will discuss the architecture and implementation of user accessible threads in Solaris. Following that he will describe the architecture of the fully- preemptible, realtime, Symmetric Multiprocessing kernel that supports the user threads implementation. We will certainly want to explore how Solaris could be adapted to a workstation cluster implemented with the Scalable Coherent Interface. And we will be interested in knowing whether Solaris supports Sun's S3mp version of Distributed Shared Memory. A discussion of member entrepreneurial projects currently underway will begin at 7:15PM and the main meeting will take place at 7:45 PM. Location is Sun Microsystems at 901 South San Antonio Road in Palo Alto, California. Southbound travelers exit 101 at San Antonio; northbound attendees also exit at San Antonio and take the overpass to the other side of 101. There is an $8 visitor fee for non- members and members ($40 per year) will be admitted free. Please be prompt; we expect a large attendance; don't be left out or left standing. For further information contact: -- B. 
Mitchell Loebel parallel@netcom.com Director - Strategic Alliances and Partnering 408 732-9869 PARALLEL Processing Connection Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wakatani@cse.ogi.edu (Akiyoshi Wakatani) Subject: Re: Irregular Data Access Pattern Loops on Parallel Machines References: <93-10-031@comp.compilers> >> Hi, >> I'm looking for references to articles dealing with the problem of >> compiling loops with irregular data access patterns on parallel >> computers for minimum communication/maximum locality. The following articles will help you. @INPROCEEDINGS{koelbel.90, title = "Parallel Loops on Distributed Machines", booktitle = "{DMCC-5}", year = 1990, author = "Charles Koelbel and Piyash Mehrotra and Joel Salts and Harry Berryman", } @ARTICLE{Koelbel_Mehrotra.91, title = "Compiling {G}lobal {N}ame-{S}pace {P}arallel {L}oop for {D}istributed {E}xecution", journal = "IEEE trans. on Parallel and Distributed Systems", volume = 2, number = 4, year = 1991, author = "Charles Koelbel and Piyush Mehrotra", } @INPROCEEDINGS{Saltz.93, title = "Slicing Analysis and Indirect Accesses to Distributed Arrays", booktitle = "Sixth Annual Workshop on Languages and Compilers for Parallel Computing", year = 1993, author = "Raja Das and Joel Saltz and Reinhard Van Hanxleden", } Akiyoshi Wakatani Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: John Pritchard Newsgroups: comp.parallel,cs.research,comp.theory,comp.research.japan Subject: Request for info/persons : Soft Logic Organization: Columbia University Department of Computer Science Hello, I am interested in persuing Soft Logic ( != fuzzy l.). Please reply . . . directly! thanks, john -- ugrad, dept of economics, jdp10@columbia.edu research assistant, jdp@cs.columbia.edu 506 W 113, 1A, NY, NY 10025; 212.663.4118 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: milland@iesd.auc.dk (Lars Milland) Subject: Distributed Shared Memory Organization: Mathematics and Computer Science, Aalborg University We are a group of students at the department of Math and CS at Aalborg University developing a distributed shared memory system and we have run into some problems. We are looking for a formal definition or a informal description of different kinds of memory consistency/coherence in particular sequential and processor consistency. Please E-mail any answers to milland@iesd.auc.dk or dubois@iesd.auc.dk Thanks in advance. -- MM MM OOOOO L L EEEEEE MMM MMM O O O L L E \\ M MM M O O O L L EEE ====>> milland@iesd.auc.dk M MM M O O O L L E // Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wittmann@Informatik.TU-Muenchen.DE (Marion Wittmann) Subject: algorithm classification Organization: Technische Universitaet Muenchen, Germany I'm trying to classify parallel algorithms. Especially I'm interested in their characteristial SVM-properties. Therefor I need some literature about application schemes and classification of algorithms, not only of parallel ones. 
If you know any literature dealing with this subject, please mail wittmann@informatik.tu-muenchen.de Thanks for your help Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Thu, 7 Oct 93 11:28:33 EDT From: sumit@galileo.eng.wayne.edu (DPR) Subject: MasPar MP 1 problems I am just learning how to use a MasPar MP 1 machine. Could anyone send me some programs in 'C' and/or MPL which can be executed under the MPPE ? I also have a doubt about a system error message: problem LICENSE FAILURE: 5 -- no such feature exists As a result of this I cannot execute any program under the MPPE... Sumit Roy Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: arumilli@unlinfo.unl.edu (subbarao arumilli) Subject: CRAY MPP Organization: University of Nebraska--Lincoln Dear netreaders, can anyone forward me the information regarding the new cray massively parallel processing system. Any information is appreciated. thanks subbarao subbarao@engrs.unl.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.software-eng,comp.parallel,comp.dsp,comp.realtime,comp.robotics From: srctran@world.std.com (Gregory Aharonian) Subject: New patent titles and numbers emailservice now being offered Organization: The World Public Access UNIX, Brookline, MA FREE NEW PATENT TITLES MAILING LIST VIA EMAIL Starting in late October, I will be offering a free service to anyone who can receive email from the Internet. The service will be a weekly mailing of all of the patents issued by the patent office during the last week (or more specifically, all of the patents listed in the most recent issue of the USPTO Patent Gazette). For each patent, the patent title and number will be listed. The mailing will consist of three files, each on the average totalling 50K of ASCII text - one file with the mechanical patents, one file with the chemical patents and one file with the electronic patents. I have attached some sample listings to the end of this message. With each file there will be provided some information on how to order paper and electronic versions of patents. If you can receive and UNZIP a UUENCODED file, those files will be about 25K in size, for those with smaller mail buffers. If this catches on, I will probably convert this over to a new USENET group, something like comp.patents.new or the equivalent. Also, I will be setting up an anonymous ftp site to store the files being mailed out, as well as back files. Until then, I will be running only the mailing service. SERVICE IS FREE - FILES UNCOPYRIGHTED The service will be free, and the files are not copyrighted, so you are free to do whatever you want with the information. The mailing-list is open to anyone on the planet. If this service is of help, a voluntary registration fee will be appreciated. Given sufficient contributions, I will be able to post to the Internet and mailing list other patent information, in particular more information on US patents, such as patent classification and patent assignee, and brief information on foreign patents (in particular, Japanese and British). My goal in the long run is to help coordinate hooking up the Patent Office's APS system to the Internet, and attach to the Internet CDROM drives with the CDROMs sold by the Patent Office. Occasionally I will mail out to the mailing list announcements from the Patent Office and other tidbits of patent information. 
This will tend to be short messages. I am not affliated with the Patent Office in anyway, nor am I am patent lawyer. TO RECEIVE THE NEW PATENT LISTINGS FILES If you are interested in receiving these files, please send to me your name and postal and email addresses, the words MECHANICAL, CHEMICAL and/or ELECTRONIC (depending which groups you care to receive), and the words ASCII or UUZIP (depending on which format you want - UUZIP means you can receive and UNZIP a UUDECODED file). If you want to receive patent news information (PTO announcements, lawsuit outcomes), send the word NEWS. If you want to receive all of the files, send the word ALL. Also, if you don't mind, please include some information on what you do and how you might use this patent information. Forward your requests to: patents-request@world.std.com Please pass the word, especially to those not on the Internet or that don't read USENET. Patent information is very valuable for finding out what others are doing, for locating new technologies to license, and to measure rates of progress in other fields. If you have any questions, please contact me at patents@world.std.com and I'll get back. Gregory Aharonian 617-489-3727 Source Translation & Optimization P.O. Box 404 Belmont, MA 02178 patents@world.std.com SAMPLE LIST OF PATENTS AS DISTRIBUTED TO MAILING LIST 5177809 Optical cable having a plurality of light waveguides 5177808 Optical energy beam transmission path structure 5177807 Device for the alignment of an optical fiber and an optoelectronic component 5177806 Optical fiber feedthrough 5177805 Optical sensors utilizing multiple reflection 5177804 Waveguide-type optical switch 5177803 Coaxial optical fiber coupler transmitter-receiver apparatus and method of making same 5177802 Fingerprint input apparatus 5177801 Cross fader for editing audio signals 5177800 Bar code activated speech synthesizer teaching device 5177799 Speech encoder 5177798 Sound reproducer for high definition television 5177797 Block transformation coding and decoding system with offset block division -- ************************************************************************** Greg Aharonian srctran@world.std.com Source Translation & Optimization 617-489-3727 P.O. Box 404, Belmont, MA 02178 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.lang.functional,comp.parallel From: phammond@cs.tcd.ie (Paul Hammond) Subject: Parallel Functional Architectures Summary: What's the current state of the art ? Keywords: Parallel, architecture, Sender: phammond@cs.tcd.ie Organization: Trinity College, Dublin (Computer Science) Can anybody tell me what's the current state of the art in parallel functional *architectures* at the moment ? What are the most successful systems out there in terms of scalability and speedup ? I have many references to systems but I would like to narrow that search to the more successful ones and any that might have come to life in recent times which my references mightn't cover. TIA Paul -- --------------------------------------------------------------------------- | Paul Hammond E-mail : phammond@dsg.cs.tcd.ie Phone : +353-1-7022354 | | Computer Science Dept, Trinity College Dublin, Ireland | --------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: furnari@sp61.csrd.uiuc.edu (Mario Furnari) Subject: Call For Paper Int. 
Workshop on Massive Parallelism 94 Message-ID: <1993Oct7.165731.27431@csrd.uiuc.edu> Keywords: parallel, CFP IWMP-94 Organization: Univ of Illinois, Center for Supercomputing R&D, Urbana, IL References: <93-10-031@comp.compilers> Enclosed is the LaTeX version of the first announcement of the 2nd International Workshop on Massive Parallelism, to be held in Capri (Italy) on October 3-7, 1994. Please post to all interested people. Mario Mango Furnari Istituto di Cibernetica mf@arco.na.cnr.it ========cut here=================================== \documentstyle[10pt]{article} % HORIZONTAL MARGINS % Left margin 1 inch (0 + 1) \setlength{\oddsidemargin}{0in} % Text width 6.5 inch (so right margin 1 inch). \setlength{\textwidth}{165mm} \setlength{\textheight}{210mm} % ---------------- % VERTICAL MARGINS % Top margin 0.5 inch (-0.5 + 1) \setlength{\topmargin}{+0 in} % Head height 0.25 inch (where page headers go) %\setlength{\headheight}{0.25in} % Head separation 0.25 inch (between header and top line of text) %\setlength{\headsep}{0.25in} % Text height 9 inch (so bottom margin 1 in) %\setlength{\textheight}{8.0in} %\newlength{\illuswidth} %\setlength{\illuswidth}{\textwidth} %\addtolength{\illuswidth}{-12mm} %\setlength{\oddsidemargin}{0in} \begin{document} \begin{center} {\bf {\Large $2^{nd}$ International Workshop} \\ on \\ {\Large Massive Parallelism: Hardware, Software and Applications} \\ October 3-7 1994 \\ } \end{center} \vspace{10mm} \begin{center} {\bf Organized by:} Istituto di Cibernetica (Naples, Italy) \\ \vspace{3mm} \noindent in cooperation with \vspace{3mm} Department of Computer Architecture (Barcelona, Spain) \\ Department of Computer Science (Patras, Greece) \\ Center for Supercomputing Research \& Development (Urbana-Champaign, U.S.A.) \end{center} The {\bf $2^{nd}$ International Workshop on Massive Parallelism: Hardware, Software, and Applications} is sponsored by the {\em Progetto Finalizzato Calcolo Parallelo e Sistemi Informativi} which was established by the Italian {\em Consiglio Nazionale delle Ricerche} to advance knowledge in all areas of parallel processing and related technologies. In addition to technical sessions of submitted paper presentations, MP '94 will offer tutorials, a parallel systems fair, and commercial exhibits. \vspace{3mm} {\bf Call For Papers:} Authors are invited to submit manuscripts that demonstrate original unpublished research in all areas of massive parallel processing including development of experimental or commercial systems. Topics of interest include: \vspace{3mm} \begin{center} \begin{tabular}{ll} Parallel Algorithms & Parallel Architectures \\ Parallel Languages & Programming Environments \\ Parallelizing Compilers & Performance Modeling/Evaluation \\ Signal \& Image Processing Systems & Operating Systems \\ Other Application Areas & \\ \end{tabular} \end{center} \vspace{3mm} To submit an original research paper, send five (hard) copies of your complete manuscript (not to exceed 15 single-spaced pages of text using point size 12 type on 8 1/2 X 11 inch pages) to the Workshop Secretariat. References, figures, tables, etc. may be included in addition to the fifteen pages of text. Please include your postal address, e-mail address, telephone and fax numbers. All manuscripts will be reviewed. Manuscripts must be received by {\bf February 1, 1994}. Submissions received after the due date or exceeding the length limit may be returned and not considered. Notification of review decisions will be mailed by {\bf April 31, 1994}.
Camera-ready papers are due {\bf May 31, 1994}. Proceedings will be available at the Symposium. Electronic submissions will be considered only if they are in \LaTeX or MS-Word 5 for Macintosh. \vspace{3mm} {\bf Tutorials:} Proposals are solicited for organizing full or half-day tutorials to be held on during the Symposium. Interested individuals should submit a proposal by {\bf January 15, 1994} to the Tutorials Chair. It should include a brief description of the intended audience, a lecture outline and vita lecturer(s). \vspace{3mm} {\bf Parallel Systems Fair:} This all day event will include presentations by researchers who have parallel machines under development, as well as by representatives of companies with products of interest to the Massively Parallel Processing community. A presentation summary should be submitted to the Parallel Systems Chair by January 15, 1994. \newpage \begin{center} {\large \bf MP '94 Organization:} \end{center} \vspace{3mm} \begin{center}{\large \bf Workshop Program Committee:}\end{center} {\small \begin{tabular}{llll} Arvind (USA) & E. Ayguade (Spain) & R. Bisiani (Italy) & R. Halstead (U.S.A.) \\ W. Jalby (France) & J. Labarta (Spain) & M. Mango Furnari (Italy) & A. Nicolau (USA) \\ D. Padua (USA) & R. Perrot (U.K.) & C. Polychronopoulos (USA) & T. Papatheodoru (Greece) \\ B. Smith (U.S.A.) & M. Valero (Spain) & R. Vaccaro (Italy) & E. Zapata (Spain) \end{tabular}} \vspace{3mm} \begin{center}{\large \bf Organizing Committee:} \vspace{3mm} {\small \begin{tabular}{ll} M. Mango Furnari (Italy) & T. Papatheodoru (Greece) \\ R. Napolitano (Italy) & C. Di Napoli(Italy) \\ E. Ayguade (Spain) & D. Padua (U.S.A.) \\ \end{tabular}} \end{center} \vspace{10mm} \begin{minipage}[t]{70mm} {\large \bf Tutorials Chair:}\\ Prof. C. Polychronopoulos \\ CSRD, University of Illinois \\ 1308 West Main St. \\ Urbana Champaign \\ IL 61801-2307 U.S.A. \\ Ph.: (+1) (217) 244-4144 \\ Fax: (+1) (217) 244-1351 \\ Internet: cdp@csrd.uiuc.edu \\ \end{minipage} \ \ \begin{minipage}[t]{70mm} {\large \bf Parallel Systems Fair Chair:}\\ Prof. A. Massarotti \\ Istituto di Cibernetica \\ Via Toiano, 6 \\ I-80072 - Arco Felice (Naples) \\ Italy \\ Phone: +39-81-853-4126 \\ Fax: +39-81-526-7654 \\ E-mail: massarotti@cib.na.cnr.it \end{minipage} \vspace{5mm} \begin{minipage}[tch]{70mm} {\large \bf Symposium Chair:} \\ Mario Mango Furnari \\ Istituto di Cibernetica \\ Via Toiano, 6 \\ I-80072 - Arco Felice (Naples, Italy) \\ Phone: +39-81-853-4229 \\ Fax: +39-81-526-7654 \\ E-mail: furnari@cib.na.cnr.it \end{minipage} \ \ \begin{minipage}[tch]{70mm} {\large \bf Secretariat:} \\ A. Mazzarella, C. 
Di Napoli \\ Istituto di Cibernetica \\ Via Toiano, 6 \\ I-80072 - Arco Felice (Naples, Italy) \\ Phone: +39-81-853-4123 \\ Fax: +39-81-526-7654 \\ E-mail: secyann@cib.na.cnr.it \end{minipage} \vspace{10mm} \begin{center} \framebox{ \begin{tabular}{ll} {\large Paper Submission Deadline:} & {\large January 8, 1994} \\ {\large Tutorial proposals due:} & {\large January 15, 1994} \\ {\large Systems fair presentation due:} & {\large January 15, 1994} \\ {\large Acceptance letter sent:} & {\large April 31, 1994} \\ {\large Camera ready copies due:} & {\large May 31, 1994} \end{tabular}} \end{center} \end{document} Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Announce@PARK.BU.EDU Subject: Graduate study in Cognitive and Neural Systems at Boston University *********************************************** * * * DEPARTMENT OF * * COGNITIVE AND NEURAL SYSTEMS (CNS) * * AT BOSTON UNIVERSITY * * * *********************************************** Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies The Boston University Department of Cognitive and Neural Systems offers comprehensive advanced training in the neural and computational principles, mechanisms, and architectures that underly human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1994 admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: Department of Cognitive & Neural Systems Boston University 111 Cummington Street, Room 240 Boston, MA 02215 617/353-9481 (phone) 617/353-7755 (fax) or send via email your full name and mailing address to: rll@cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. 
Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self- organization; associative learning and long-term memory; computational neuroscience; nerve cell biophysics; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; active vision; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique features. It has developed a curriculum that consists of twelve interdisciplinary graduate courses each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Nine additional advanced courses, including research seminars, are also offered. Each course is typically taught once a week in the evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as Biology, Computer Science, Engineering, Mathematics, and Psychology, in addition to courses in the CNS curriculum. The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems (CAS). Students interested in neural network hardware work with researchers in CNS, the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the Department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. 1993-94 CAS MEMBERS and CNS FACULTY: Jacob Beck Daniel H. Bullock Gail A. Carpenter Chan-Sup Chung Michael A. Cohen H. Steven Colburn Paolo Gaudiano Stephen Grossberg Frank H. Guenther Thomas G. Kincaid Nancy Kopell Ennio Mingolla Heiko Neumann Alan Peters Adam Reeves Eric L. Schwartz Allen Waxman Jeremy Wolfe Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: chkim@pollux.usc.edu (Chinhyun Kim) Subject: Portable Parallel Languages Organization: University of Southern California, Los Angeles, CA I am looking for some references on portable explicit parallel programming approaches. 
Specifically, I'm interested in knowing such things as the intended areas of application (numeric, symbolic or general purpose), execution model, portability, currently supported machine platforms and performance. But, I'll gratefully take any info. that comes my way. :-) Thanks in advance, -- Chinhyun Kim Dept. EE-Systems Univ. of Southern California Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: morgan@unix.sri.com (Morgan Kaufmann Publishers) Newsgroups: comp.arch,comp.ai,comp.ai.neural-nets,comp.parallel Subject: Publication Announcement for PARALLEL PROCESSING by Moldovan Organization: SRI International, Menlo Park, CA Announcing A New Publication from Morgan Kaufmann Publishers PARALLEL PROCESSING FROM APPLICATIONS TO SYSTEMS by Dan I. Moldovan (University of Southern California) ISBN 1-55860-254-2; 567 pages; cloth; $59.95 U.S. / International prices will vary. This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well-balanced across diverse topics in this concise presentation. For exceptional clarity and comprehension, the author presents complex material in geometric graphs as well as algebraic notation. Each chapter includes well-chosen examples, tables summarizing related key concepts and definitions, and a broad range of worked exercises. Features: Overview of common hardware and theoretical models, including algorithm characteristics and impediments to fast performance Analysis of data dependencies and inherent parallelism through program examples, building from simple to complex Graphic and explanatory coverage of program transformations Easy-to-follow presentation of parallel processor structures and interconnection networks, including parallelizing and restructuring compilers Parallel synchronization methods and types of parallel operating systems Detailed descriptions of hypercube systems Specialized chapters on dataflow and on AI architectures TABLE OF CONTENTS: 1. 
Introduction 1.1 Parallelism as a Concept 1.1.1 Models of Parallel Computations 1.1.2 Levels of Parallelism 1.2 Applications of Parallel Processing 1.3 Relation between Parallel Algorithms and Architectures 1.4 Performance of Parallel Computations 1.4.1 Need for Performance Evaluation 1.4.2 Performance Indices of Parallel Computation 1.4.3 Striving Toward Teraflops Performance 1.4.4 Mathematical Models 1.4.5 Performance Measurement and Analysis 1.5 Main Issues for Future research in Parallel Processing 1.5.1 Understand the Influence of Technology on Parallel Computer Designs 1.5.2 Develop Models for Large Parallel Computer Systems 1.5.3 Define the Fundamental Parallel Architectures 1.5.4 Develop a System Level Design Theory 1.5.5 Develop Theory and Practices for Designing Parallel Algorithms 1.5.6 Develop Techniques for Mapping Algorithms and Programs into Architectures 1.5.7 Develop Languages Specific to Parallel Processing 1.5.8 Develop Parallel Compilers for Commonly Used Languages 1.5.9 Develop the Means to Evaluate Performance of Parallel Computer Systems 1.5.10 Develop Taxonomies for Parallel Processing Systems 1.5.11 Develop Techniques for Parallel Knowledge Processing 1.6 Bibliographical Notes and Further Reading 1.7 Problems 2 Analysis of Parallelism in Computer Algorithms 2.1 Data and Control Dependencies 2.2 Parallel Numerical Algorithms 2.2.1 Algorithms without Loops 2.2.2 Matrix Multiplication 2.2.3 Relaxation 2.2.4 Recurrence Relations 2.2.5 QR Algorithm 2.3 Parallel Non-Numerical Algorithms 2.3.1 Transitive Closure 2.3.2 Dynamic Programming 2.3.3 Optimal Binary Search Trees 2.3.4 Subgraph Isomorphism 2.3.5 Parallel Sorting 2.4 Bibliographical Notes and Further Reading 2.5 Problems 3 Program Transformations 3.1 Removal of Output Dependencies and Anti-dependencies 3.2 Programs with Loops 3.2.1 Forms of Parallel Loops 3.2.2 Loop Transformations 3.3 Transformation of Index sets and Dependencies 3.3.1 The Basic Idea 3.3.2 Linear Transformations 3.4 Optimal Time Transformations 3.4.1 Parallel Computation Time 3.4.2 Selection of Optimal Time Transformation 3.5 Nonlinear Transformations 3.6 Bibliographical Notes and Further Reading 3.7 Problems 4 Array Processors 4.1 Single-Instruction Multiple-Data (SIMD) Computers 4.1.1 Local-Memory SIMD Model 4.1.2 Shared-Memory SIMD 4.1.3 Three-Dimensional SIMD Model 4.2 Interconnection Networks for SIMD Computers 4.2.1 Permutation Functions 4.2.2 Single-Stage Networks 4.2.3 Multistage Networks 4.3 SIMD Supercomputers 4.3.1 The Connection Machine 4.3.2 The Hughes 3-D Computer 4.4 Systolic Array Processors 4.4.1 Principles of Systolic Processing 4.4.2 Warp and iWarp 4.5 Associative Processing 4.5.1 The Structure of an Associative Memory 4.5.2 Algorithms 4.5.3 Associative Array Processors 4.6 Bibliographical Notes and Further Reading 4.7 Problems 5 Mapping Algorithms into Array Processors 5.1 Mapping of Algorithms into Systolic Arrays 5.1.1 Systolic Array Model 5.1.2 Space Transformations 5.1.3 Design Parameters 5.2 Algorithms Partitioning for Fixed-Size Systolic Arrays 5.2.1 The Partitioning Problem 5.2.2 Examples of Algorithm Partitioning 5.2.3 Partitioning Methodology 5.3 Mapping of Algorithms into SIMD Processors 5.3.1 Remapping Transformations 5.3.2 Design Tradeoffs Using Transformations 5.3.3 Relation Between Logical Transfers and Physical Transfers 5.4 Mapping of Algorithms into Mesh-Connected Networks 5.4.1 Mapping techniques 5.4.2 Mapping of Algorithms with the Perfect-Shuffle Permutation 5.5 Bibliographical Notes and Further Reading 5.6 
Problems 6 Multiprocessor Systems 6.1 Multiprocessor Organization and Operating Principle 6.1.1 Shared-Memory Systems 6.1.2 Message-Passing Systems 6.1.3 Primary Issues in Multiprocessing Systems 6.2 Multiprocessor Interconnection Networks and Memories 6.2.1 Interconnection Organizations 6.2.2 Network Characteristics 6.2.3 NYU Enhanced Omega Network 6.2.4 Multiprocessor Memories 6.3 Mapping Algorithms into Multiprocessors 6.3.1 Parallelism Detection 6.3.2 Partitioning 6.3.3 Scheduling 6.4 Operating System for Multiprocessors 6.4.1 Operating System Functions 6.4.2 Synchronization 6.4.3 The MACH Operating System 6.4.4 Multiprocessor Operating System Organization 6.5 The Cedar Multiprocessor 6.5.1 Architecture 6.5.2 Software 6.6 Hypercube Computers 6.6.1 Hypercube Topology 6.6.2 Design Issues 6.6.3 From Hypercubes to Touchstones 6.7 Bibliographical Notes and Further Reading Problems 6.8 Problems 7 Data-Flow Computing 7.1 Data and Demand-Driven Models of Computation 7.1.1 Basic Models 7.1.2 Data-Flow graphs 7.2 Static Data-Flow Computers 7.3 Dynamic Data-Flow Computers 7.3.1 The Tagged-Token Principle 7.3.2 The Manchester Data-Flow Computer 7.3.3 The SIGMA-1 Data-Flow Computer 7.4 Combining Data Flow and Control Flow 7.4.1 Hybrid Data-Flow Computers 7.5 Bibliographical Notes and Further Reading 7.6 Problems 8 Parallel Processing of Rule-Based Systems and Semantic Networks 8.1 Parallelism Analysis in Rule-Based Systems 8.1.1 Rule-Based Systems 8.1.2 Parallelism in the Match Phase 8.1.3 Rule Interdependencies 8.1.4 Search Space Reduction 8.2 Multiple-Rule Firing 8.2.1 Compatibility and Convergence 8.2.2 Multiple-Rule Firing Models 8.2.3 Mapping RBS into Multiprocessors 8.3 Knowledge Representation and Reasoning Using Semantic Networks 8.3.1 Semantic Networks 8.3.2 Marker/Value Propagation Model 8.3.3 Reasoning on Semantic Networks 8.4 Parallel Natural Language Processing 8.4.1 Memory-Based Parsing 8.4.2 Parallel Linguistic Processing 8.5 Semantic Network Array Processor 8.5.1 Conceptual SNAP Architecture 8.5.2 Marker Processing on a SNAP 8.5.3 Examples of Knowledge Processing on a SNAP 8.6 Bibliographical Notes and Further Reading 8.7 Problems OTHER TITLES OF INTEREST FROM MORGAN KAUFMANN Computer Architecture: A Quantitative Approach John L. Hennessy (Stanford) and David A. Patterson (UC Berkeley) Parallel Algorithms and Architectures: Arrays, Trees, and Hypercubes F. Thomson Leighton (MIT) Parallel Computing Works! Geoffrey C. Fox (Syracuse), Roy D. Williams (Caltech), and Paul C. Messina (Caltech) ORDERING INFORMATION: Orders may be placed by: U.S. Canada Phone: (800) 745-7323 (outside U.S.& Canada (415) 578-9911) Fax: (415) 578-0672 E-mail: morgan@unix.sri.com Mail: 2929 Campus Dr., #260 San Mateo, CA 94403 USA Europe/UK Phone: (0273) 748427 Fax: (0273) 722180 Mail: 27 Church Road, Hove, East Sussex, BN3 2FA, England Australia Phone: (02)566-4400 Fax: (02) 566-4411 Mail: Locked Bag 2, Annandale P.O., NSW 2038, Australia All other countries please contact our U.S. office. American Express, Master Card, VISA and Personal Checks drawn on U.S. banks accepted for payment. Shipping: In the U.S. and Canada, please add $3.50 for the first book and $2.50 for each additional book for surface shipping. For International shipping please add $6.50 for the first book and $3.50 for each additional book. 
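In the spirit of the program-transformation material in the table of contents above (section 3.1, "Removal of Output Dependencies and Anti-dependencies"), here is a made-up C++ sketch, not taken from the book, of the simplest such transformation: expanding a shared scalar temporary so that loop iterations no longer interfere with one another.

// Illustrative only (not from the book): removing an output/anti-dependence
// by giving each iteration its own copy of the temporary ("scalar expansion").
#include <cstdio>
#include <vector>

int main() {
    const int n = 8;
    std::vector<double> a(n), b(n), t(n);
    for (int i = 0; i < n; ++i) a[i] = i;

    // With a single shared temporary, every iteration writes and then reads
    // the same scalar, which serializes the loop:
    //   double tmp;
    //   for (int i = 0; i < n; ++i) { tmp = a[i] + 1.0; b[i] = tmp * tmp; }
    //
    // After expanding the scalar into the array t, iteration i touches only
    // a[i], t[i] and b[i], so the iterations are independent:
    for (int i = 0; i < n; ++i) {
        t[i] = a[i] + 1.0;
        b[i] = t[i] * t[i];
    }

    std::printf("b[n-1] = %f\n", b[n - 1]);
    return 0;
}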
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: snoo@uni-paderborn.de (Holger Naundorf) Subject: Q:virtual shared memory Organization: Uni-GH Paderborn, Germany Does anyone know of a public domain program that simulates a shared memory machine on a network of Sun workstations? Thanks in advance, Holger Naundorf snoo@uni-paderborn.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ees2cj@ee.surrey.ac.uk (Chris R Jesshope) Subject: Euro-Par Conference Site Bids Organization: University of Surrey, Guildford, England As you may be aware, the European conferences CONPAR/VAPP and PARLE will be merging to give a new series of European conferences called EURO-PAR. EURO-PAR will be held annually at a site in Europe and the first conference will be held in 1995. The Steering Committee of the EURO-PAR Conference will be considering the selection of sites at its meeting on 12 November. Anyone wishing to be considered should prepare a brief bid covering the points listed in the document attached. The deadline for bids is Friday 5th November. All bids should be addressed (by email) to me, Chris Jesshope Chairman Euro-Par Steering Committee C.Jesshope@ee.surrey.ac.uk encl........................................................ EURO-PAR site procedures for organizing a conference The following items give a framework for preparing a bid to host the Euro-Par Conference. The Steering Committee will take these into account when selecting a site. Sessions and rooms ================== A more or less comfortable ambience is necessary. Public institutions with free lecture halls would keep conference fees low. Larger hotels mostly offer better conference rooms etc.; they should be preferred if costs are moderate. In that case the costs should be demonstrated by the local organizers. In any case the costs of three or four hotel options should be given, including typical distances to the conference site. Expected Participants ===================== (typical values) -The number of participants should be limited to 400 persons if no special information on more persons is available. -A minimum of 200 persons should be assumed for typical estimates of conference fees. -If fewer than 150 persons are expected the conference should be cancelled. Rooms Required ============== -The main hall must host a minimum of 400 persons (plenum). -It should be possible to run a maximum of 4 parallel sessions: 4 rooms for 150 persons each (minimum 100 persons) are needed. -There should be 2-4 rooms for meetings of special interest groups (for about 20-50 persons each). -There should be areas for: coffee breaks, poster exhibition, book exhibition, systems exhibition (mainly network connections via workstations to remote systems). Technical installations of lecture rooms ======================================== -Each lecture room must have two overhead projectors, a microphone+loudspeaker installation, and a room light-dimming system. -Further devices are helpful and should be listed: blackboard, slide projectors, video projectors, large-screen projectors, computer-aided video systems. Lunch times =========== -There should be restaurants at a maximum walking distance of 10 minutes with a total capacity of 5 times the estimated number of conference participants. Conference dinner ================= -In order to bring together the conference participants, a comfortable restaurant with a pleasant atmosphere should be taken into account.
Lodgings ======== -Hotels, pensions, etc should have a capacity of 20 times the number of expected conference participants if no special offer and reservation is available. A letting agency should do the job. Travel connections ================== -A map of the conference location should be given. Car parking and typical airport-, train-, bus-, and highway-connections should be evident and satisfactory. If some of them are not satisfactory the organizers should propose further support. Special circumstances of conference site ======================================== -This may concern local industry, local expertise or even special interests, such as festivals etc. although the latter may put pressure on accomodation. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: References for M-Machine From: "Aggeliki Balou (+44-71-387-7050 ext.3721)" Dear all, I would be grateful if somebody could provide me with reference(s) for the M-Machine (if there are ftp-able papers/ reports it would be even better!). This is a massively parallel machine, developed at MIT, which is a later version of the J-Machine, intended mainly for AI applications (I believe) and supports fine-grained computation based on Actors. Many thanks in advance Aggeliki +--------------------------+-------------------------------------------------+ |Aggeliki Balou | JANET:aggeliki@uk.ac.ucl.cs | |Dept. of Computer Science | BITNET:aggeliki%uk.ac.ucl.cs@UKACRL | |University College London |Internet:aggeliki%cs.ucl.ac.uk@nsfnet-relay.ac.uk| |Gower Street | ARPANet:aggeliki@cs.ucl.ac.uk | |London WC1E 6BT | UUCP:...!mcvax!ukc!ucl-cs!aggeliki | +--------------------------+-------------------------+-----------------------+ | Tel:(071)-387-7050 x3721 | Fax: (071)-387-1397 | Telex: 28722 | +--------------------------+-------------------------+-----------------------+ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: packer@fermi.gsfc.nasa.gov (Charles Packer) Newsgroups: sci.med,misc.headlines,comp.parallel Subject: Gelernter (Yale bombing victim) eye injury Date: 8 Oct 1993 12:02:41 GMT Organization: Dept. of Independence In Thursday's NY Times there is a long article, with photo, about David Gelernter, the Yale computer scientist who was wounded by a bomb on June 22. Among other injuries, he was blinded in one eye, according to the story. The photo of him, with bandaged hand but unblemished face, gives no clue as to the extent of his eye injury. He expects to have surgery which "may help restore" his vision. What might that surgery be? Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stratton@dcs.warwick.ac.uk (Andrew Stratton) Subject: Help! Need `twelve ways to fool the masses'. Message-ID: <1993Oct8.120719.13625@dcs.warwick.ac.uk> Organization: Department of Computer Science, Warwick University, England I can't find the document above - which tells how to fool people when quoting parallel computer performance. I fairly certain that it is available somwhere on an ftp site - but I can't find it with archie. Please could someone let me know where to get it from - ftp or paper reference is fine. Thanks in advance. Andy Stratton. P.S. Please reply by email - I will post if requested. 
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super,misc.jobs.offered,misc.jobs.offered.entry,sci.math.num-analysis From: wdj@c3serve.c3.lanl.gov (Wayne D Joubert) Subject: GRA Positions in Parallel Computation / Numerical Analysis Organization: Los Alamos National Laboratory Graduate Student Research Assistants Los Alamos National Laboratory (PARALLEL COMPUTATION AND NUMERICAL ANALYSIS) The Computer Research and Applications Group at Los Alamos National Laboratory is currently seeking highly motivated graduate students to participate in the Graduate Research Assistant program. Students with experience in any or all of the following categories are encouraged to apply: Parallel Computer Programming Numerical Linear Algebra Software Library Development Experience with Fortran, C and assembly languages on parallel machines such as the Connection Machine CM-2/200, CM-5, Intel iPSC/860, Paragon and workstation clusters is desirable. A minimum GPA of 2.5 is required. Appointments can range from 3 to 12 months in duration. Interested individuals are encouraged to contact Wayne Joubert for more information: Wayne Joubert Los Alamos National Laboratory Group C-3, MS B-265 Los Alamos, NM 87545 EMAIL: wdj@lanl.gov Los Alamos is an equal-opportunity employer. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dduke@ibm1.scri.fsu.edu (Dennis Duke) Subject: Second Announcement - Cluster Computing '93 -- CLUSTER WORKSHOP '93 Second Announcement NOVEMBER 1, 1993 DEADLINE FOR ABSTRACTS Supercomputer Computations Research Institute Florida State University Tallahassee, Florida 32306 Tuesday, December 7 - Thursday, December 9, 1993 NEW THIS YEAR Special Tutorial/Vendor Day Monday, December 6, 1993 Organizers: Sudy Bharadwaj (sudy@sca.com), SCA Peter Dragovitsch (drago@scri.fsu.edu), SCRI, FSU Dennis Duke (dduke@scri.fsu.edu), SCRI, FSU Adam Kolawa (ukola@flea.parasoft.com), Parasoft Tim Mattson (tgm@SSD.intel.com), Intel Neil Lincoln (nrl@nips.ssesco.com), SSESCO Vaidy Sunderam (vss@mathcs.emory.edu), Emory University Cluster Workshop '93 continues the series of cluster computing workshops held at SCRI in 1991 and 1992. The nature of the workshop (submission, sessions, proceedings) is deliberately designed to enable maximum dissemination and exchange of information in the most timely manner possible. The goal of the workshop is to bring together people who are interested in the issues of using heterogeneous clusters of computers as computational resources. 
This group of people would include: - computational scientists or other end-users who desire to exploit the power and capabilities of heterogeneous clusters as an alternative to or in conjunction with conventional supercomputers or MPP's - software developers of queuing systems, parallel processing packages, and other software tools - system administrators interested in both the strategic and technical issues of running clusters, including in general any kind of heterogeneous collection of networked computers - vendors who are developing products to serve this market: (a) new generations of high performance workstations (b) new forms of packaging of workstations (c) new high speed networking products (d) new products for addressing mass storage and other I/O needs (e) any other applicable technology - persons who would like to share their experiences, and especially give critical advice on desired user requirements, shortcomings of present hardware and software configurations, successes and failures to date, overviews of planned projects, etc. Although most practical implementations of clusters to date consist of collections of RISC workstations, we are interested also in more general configurations which might include any number of distinct architectures, and a variety of new high-speed network interconnections. The format of the workshop will be to fill the day with contributed and invited talks, and to have informal evening gatherings designed to further enhance the opportunity for information exchange. We especially encourage contributions of a 'practical experience' nature, since this is likely to be of the most use to the many people who are involved in cluster computing, and will also be complimentary to the many conferences that stress more the academic side of computing and computer/computational science research. The tentative outline schedule for the workshop for Tuesday through Thursday is included below. We will adjust it depending upon the number of contributed papers. NEW FOR THIS YEAR will be a special tutorial/vendor day, Monday, December 6. The tentative program for that day is attached below. There will be no extra charge for registered workshop attendees for the tutorial/vendor day. Please register for the workshop using the form attached below. We encourage as many attendees as possible to plan to make a presentation at the workshop. We do ask that speakers plan to address topics within the scope outlined above. Please send us a short abstract describing your talk, so we can plan an appropriate place in the schedule. An announcement of the schedule of speakers will be distributed as soon as possible. Within the limits of available time, we would like to accommodate as many speakers as practicable. PLEASE SUBMIT ABSTRACTS BY MONDAY, NOVEMBER 1, 1993. The 'proceedings' of the workshop will be published via anonymous ftp. We will request each speaker to send us an appropriate electronic version of his talk (ascii, postscript, tex, latex, troff, etc.). These will then be placed on the machine ftp.scri.fsu.edu for further distribution. The proceedings of the 1991 and 1992 meetings are already on the machine. Any questions or requests can be sent via email to cluster-workshop@scri.fsu.edu or to one of the organizers. SCRI can be reached by phone at (904)-644-1010. 
============================================================================== OUTLINE SCHEDULE Monday, December 6 8:00 am - 10:00 pm Tutorial/Vendor Program - Conference Center (see detailed schedule below) 6:00 - 8:00 pm Registration and Reception - Radisson Hotel Tuesday, December 7 7:30 - 8:30 Continental Breakfast and Registration - Conference Center 8:30 - 9:15 Invited Speaker Number 1 9:15 - 10:00 Invited Speaker Number 2 10:00 - 10:30 Break 10:30 - 12:00 Session 1 10:30 - 12:00 Session 2 12:00 - 1:00 Lunch 1:00 - 3:00 Plenary Session 3:00 - 3:30 Break 3:30 - 5:30 Session 3 3:30 - 5:30 Session 4 6:30 - 10:00 Hosted Reception at SCRI 6:30 - 8:00 Demos at SCRI 8:00 - 9:30 Moderated Session Wednesday, December 8 7:30 - 8:30 Continental Breakfast and Registration - Conference Center 8:30 - 9:15 Invited Spekaer Number 3 9:15 - 10:00 Invited Speaker Number 4 10:00 - 10:30 Break 10:30 - 12:00 Session 5 10:30 - 12:00 Session 6 12:00 - 1:00 Lunch 1:00 - 3:00 Session 7 1:00 - 3:00 Session 8 3:00 - 3:30 Break 3:30 - 6:00 Session 9 3:30 - 6:00 Session 10 6:30 - 10:00 Hosted Reception at SCRI 6:30 - 8:00 Demos at SCRI 8:00 - 9:30 Moderated Session Thursday, December 9 7:30 - 8:30 Continental Breakfast - Conference Center 8:30 - 10:00 Session 11 8:30 - 10:00 Session 12 10:00 - 10:30 Break 10:30 - 12:00 Session 13 10:30 - 12:00 Session 14 12:00 Workshop Ends ============================================================================== Cluster Workshop 93 Tutorial/Vendor Day FSU Conference Center Monday, December 6, 1993 For information contact Louis Turcotte (turcotte@bulldog.wes.army.mil) Sponsors: Hewlett-Packard (confirmed) others (invited) 7:45 - 8:00 Gather (Refreshments) 8:00 - 8:30 Welcome/Intro/Overview of day o Dennis Duke (SCRI) (Confirmed) o Louis Turcotte (MSU/ERC) (Confirmed) 8:30 - 9:30 Overview of Batch Environments (Tutorial) o Michael Nelson (NASA Langley Research Center)(Confirmed) 9:30 - 10:00 Break (Refreshments) 10:00 - 12:00 Batch product presentations o Condor: Miron Livny (UofWisc/Madison) (Confirmed) o DQS: Dennis Duke (SCRI) (Confirmed) o LoadLeveler: (IBM) (Confirmed) o LSF: Songnian Zhou (Platform Computing) (Confirmed) o NQS: (Sterling Software) (Invited) o TaskBroker: (HP) (Confirmed) 12:00 - 1:00 Lunch (Box) 1:00 - 3:00 Overview of Parallel Environments (Tutorial) o Sudy Bharadwaj (SCA) (Confirmed) o Tim Mattson (Intel) (Confirmed) o Doug Elias (Cornell Theory Center) (Confirmed) 3:00 - 3:30 Break (Refreshments) 3:30 - 5:30 Parallel product presentations o Linda: (SCA) (Confirmed) o PAMS: Wayne Karpoff (Myrias Computer Technologies) (Confirmed) o p4: Ewing Lusk (Confirmed) (Argonne National Laboratory) o Express: Adam Kolawa (Parasoft) (Confirmed) o PVM: Vaidy Sunderam (Emory University) (Confirmed) o xHPF: Bob Enk (Applied Parallel Research) (Confirmed) 7:00 - 10:00 Hardware vendor presentations and refreshments o Convex: Brian Allison (Confirmed) o DEC: (Confirmed) o HP: Mark Pacelle (Confirmed) o IBM: (Confirmed) o SGI: (Invited) o SUN: (Invited) =============================================================================== REGISTRATION AND HOTEL INFORMATION WORKSHOP ON RISC CLUSTER COMPUTING December 7-9, 1993 PLEASE TYPE OR PRINT Name _____________________________________ Social Security Number ___________________ (your SSN is optional, but without it any request for a registration refund will be delayed) Company __________________________________ Address/Mailstop __________________________________________________ City/State/Zip/Country 
____________________________________________ Phone (___)______________________ Email address _______________________________________ Workshop Program Number: 1902694 The registration fee for the workshop is $145, and includes three continental breakfasts, Tuesday and Wednesday lunches, morning and afternoon break refreshments, and the food and drink for the evening sessions. There is no extra charge for the Monday tutorial/vendor day, but attendees for that day must be registered for the whole workshop. _____ Check here if you will attend the Monday tutorials. If you want to pay by check, please print out and fill in the above registration form, including especially the Workshop Program Number, and return the registration form and (payable to FSU) fee to: Center for Professional Development and Public Service Conference Registrar Florida State University Tallahassee, Florida 32306-2027 If you want to register by credit card, you may register by email by sending back an edited form of the above registration information, as well as the following credit card information: Credit Card Name (Mastercard or Visa ONLY)_____________________________ Credit Card Number ___________________________ Name (as it appears on card) __________________________________ Expiration Date of Card _____________________ There is an additional 2% charge by the University for credit card registrations (bringing the required total to $148). Email registrations, and all other inquiries about the workshop, may be sent to: email: cluster-workshop@scri.fsu.edu, or fax: (904)-644-0098, attention of Pat Meredith Hotel Information: The workshop hotel is the Radisson, which is within walking distance of the conference center and the FSU campus. Rooms are $60 per night, single or double, and reservations should be made by November 10. Be sure to mention Cluster Workshop '93 to get the special rate. Radisson Hotel 415 North Monroe Street Tallahassee, FL 32301 Phone and Fax: (904) 224-6000 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: meadorj@watt.oedison.com (Jim C. Meador) Subject: Massive Parallel Processors Organization: Ohio Edison What business functions are being met utilizing massively parallel computing? Is the main use in the area of super servers for data base access? I would be interested in hearing any success/failure stories also. Thanks! ================================================================= * Jim C. Meador * Ohio Edison Company * * Coordinator, Advanced Information * 76 South Main St. * * Technology Projects * Akron, OH 44308 * Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: eugene@nas.nasa.gov (Eugene N. Miya) Subject: 12 ways >Subject: [l/m 3/17/91] Ways to fool the masses with benchmarks (15/28) c.be. FAQ >Reply-To: eugene@amelia.nas.nasa.gov (Eugene N. Miya) >Organization: NASA Ames Research Center, Moffett Field, CA Keywords: who, what, where, when, why, how 15 12 Ways to Fool the Masses with Benchmarks 16 SPEC 17 Benchmark invalidation methods 18 19 WPI Benchmark 20 Equivalence 21 TPC 22 23 24 25 Ridiculously short benchmarks 26 Other miscellaneous benchmarks 27 28 References 1 Introduction to the FAQ chain and netiquette 2 3 PERFECT Club 4 5 Performance Metrics 6 7 Music to benchmark by 8 Benchmark types 9 Linpack 10 11 NIST source and .orgs 12 Measurement Environments 13 SLALOM 14 >From David Bailey et al. Quote only 32-bit performance results, not 64-bit results. 
Present performance figures for an inner kernel as the performance of the entire application. Quietly employ assembly code and other low-level language constructs. Scale up the problem size with the number of processors, but omit any mention of this fact. Quote performance results projected to a full system. Compare your results against scalar, unoptimized code on Crays. When direct run time comparisons are required, compare with an old code on an obsolete system. If megaFLOPS rates must be quoted, base the operation count on the parallel implementation, not on the best sequential implementation. Quote performance in terms of processor utilization, parallel speedup or megaFLOPS per dollar. Mutilate the algorithm used in the parallel implementation to match the architecture. Measure parallel run times on a dedicated system, but measure conventional run times in a busy environment. If all else fails, show pretty pictures and animated videos, and don't talk about performance. Ref. 1 NAS TR # Ref. 2 %A David Bailey %T Twelve Ways to Fool the Masses when Giving Performance Results on Parallel Computers %J Supercomputing Review %V 4 %N 8 %D August 1991 %P 54-55 References: Darrell Huff, How to Lie with Statistics; How to Lie with Maps. Gordon Bell's 11 rules of supercomputer design: 1) Performance, performance, performance. 2) Everything matters. 3) Scalars matter most. 4) Provide vectors as price allows. 5) Avoid holes in performance. 6) Place peaks in performance. 7) Provide a decade of addressing. 8) Make it easy to use. 9) Build on others' work. 10) Design for the next one (and do it again) [three times] 11) Have slack resources. [ASCII-art diagram in the original post: a triangle with "Architecture" and "Algorithms" as its two sides and "Language" as its base.] Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: gross@noether.ucsc.edu (Mike Gross) Subject: Parallel Fourier transforms? Date: 8 Oct 1993 21:38:58 GMT Organization: University of California, Santa Cruz Nntp-Posting-Host: noether.ucsc.edu Keywords: FFT,integral transforms I need to perform a real Fourier transform on an i860 supercomputer with local memory only. My array will not fit in a single processor's local memory. Does anyone out there know how to properly fragment my array without losing the long wavelength components in the transformed array? Better yet, are there any publicly available efficient parallel FFT routines lying around? It seems like I wouldn't be the first person to run into this problem. If you post a reply, please e-mail it to me also. Many thanks. Mike Gross -- Michael A. Gross Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: johan@sys.toronto.edu (Johan Larson) Subject: coverage of parallel processing market? Organization: CSRI, University of Toronto I am a graduate student at the University of Toronto, working on a project for SIGPAR, the parallel processing discussion group. The project is a survey of the parallel processing market, focusing on commercial rather than research issues. Are there any periodicals which offer consistent coverage of the parallel processing market? Are there any other sources which I should consult?
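One of the items in the Bailey list above (scaling up the problem size with the number of processors while staying quiet about it) is also the crux of the earlier question in this digest about Gustafson's speedup versus Amdahl's law. A small worked example using the standard textbook formulas (serial fraction s, P processors; the notation is not from either post) shows how far apart the fixed-size and scaled numbers can be, and why it matters to say which one is being quoted:

// Worked example with the standard definitions (illustrative only):
//   fixed-size (Amdahl) speedup:   S_A = 1 / (s + (1 - s) / P)
//   scaled (Gustafson) speedup:    S_G = s + (1 - s) * P
#include <cstdio>

int main() {
    const double s = 0.05;   // assume a 5% serial fraction
    for (int P : {16, 64, 256, 1024}) {
        const double amdahl    = 1.0 / (s + (1.0 - s) / P);
        const double gustafson = s + (1.0 - s) * P;
        std::printf("P = %4d   Amdahl = %6.1f   scaled (Gustafson) = %7.1f\n",
                    P, amdahl, gustafson);
    }
    return 0;
}

The scaled figure is the honest one only when the problem size really does grow with the machine; for a fixed problem size, the fixed-size (Amdahl) speedup is the number to report.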
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.arch,comp.parallel From: achien@achien.cs.uiuc.edu (Andrew Chien) Subject: ISCA '94 Call for Papers 10/15 Deadline Organization: University of Illinois, Dept. of Comp. Sci., Urbana, IL CALL FOR PAPERS The 21st Annual International Symposium on Computer Architecture Chicago, Illinois Dates: April 18-21, 1994 Sponsored by Association for Computing Machinery/SIGARCH IEEE Computer Society Institute of Electrical and Electronics Engineers (IEEE) In Cooperation with The University of Illinois at Urbana-Champaign Eight copies of a double-spaced manuscript, in English, and not exceeding 6000 words in length, should be sent to the Program Chair. Papers will be accepted for consideration until October 15, 1993. A single cover sheet should be included which contains: paper title, full names, affiliations, complete addresses, phone and FAX numbers, and email addresses of the authors, as well as a 100 to 150 word abstract and a list of up to five keywords. Because the identity of authors will not be revealed to the referees, authors' names and affiliations must appear only on the cover sheet. Authors must avoid references and citations that compromise anonymity. Notification of acceptance for both regular and other presentations will be mailed by January 5, 1994. Authors of papers accepted as regular papers will be requested to submit a final, camera-ready copy by February 15, 1994, for inclusion in the proceedings. Papers are solicited on any aspects of Computer Architecture. Topic areas include, but are not limited to, - Novel architectures and computing techniques - Multiprocessors, multicomputers, and distributed architectures - Superscalar, Superpipelined, and VLIW processors - Very-high performance architectures - Massively parallel architectures - Architectural implications of application characteristics - Non-numeric architectures - Technology impact on architecture - Language and operating systems support - Application-specific architectures - Performance evaluation and measurement - Memory systems As always, papers will be judged on their scientific merit and anticipated interest to conference attendees. It is understood that papers in new areas are likely to contain fewer quantitative evaluations and comparisons than those in more established areas. As in previous years, a series of tutorials and workshops will be held immediately preceding and/or following the symposium. Tutorial and workshop proposals will be accepted until November 1, 1993. If you wish to organize a full or 1/2 day tutorial, please send to the Tutorials Co-Chairs five copies of a detailed proposal, including tutorial title, outline, brief description of topics to be covered, intended audience, assumed attendee background, and the name(s), affiliation(s), and resume(s) of the speaker(s). If you wish to organize a workshop, please send to either one of the Workshops Co-Chairs five copies of a detailed proposal, including workshop title, description of its scope, list of invited participants, and name(s) and affiliation(s) of the organizer(s). Steering Committee: Dharma P. Agrawal, North Carolina State University, Raleigh Forest Baskett, Silicon Graphics Lubomir Bic, University of California, Irvine Edward S. Davidson, University of Michigan John L. Hennessy, Stanford University Yale N. Patt, University of Michigan Alan J. Smith, University of California, Berkeley General Co-Chairs: Wen-mei W.
Hwu Coordinated Science Laboratory University of Illinois 1308 W. Main St. Urbana, IL 61801 email: hwu@crhc.uiuc.edu Pen-Chung Yew Center for Supercomputing Research and Development University of Illinois 1308 W. Main St. Urbana, IL 61801 email: yew@csrd.uiuc.edu Program Chair: Janak H. Patel Coordinated Science Laboratory University of Illinois 1308 W. Main St. Urbana, IL 61801 email: patel@crhc.uiuc.edu Tutorials Chair: Prith Banerjee Coordinated Science Laboratory University of Illinois 1308 W. Main St. Urbana, IL 61801 email: banerjee@crhc.uiuc.edu Workshops Co-Chairs: Michael J. Foster Division of Microelectronics Information Processing Systems, National Science Foundation, 1800 G Street, NW Washington, DC 20550 email: mfoster@note.nsf.gov Kai Li Department of Computer Science 35 Olden Street, Princeton University, Princeton, NJ 08544 email:li@cs.princeton.edu Finance Chair: Jose A. Fortes School of Electrical Engineering Purdue University email: fortes@ecn.purdue.edu Registration Chair: Josep Torrellas Center for Supercomputing Research and Development University of Illinois 1308 West Main Street Urbana, IL 61801 email: torrella@csrd.uiuc.edu Publicity and Publication Chair: Andrew A. Chien Computer Science Department University of Illinois 1304 W. Springfield Ave. Urbana, IL 61801 email: achien@cs.uiuc.edu Local Arrangement Chair: John R. Barr Software Systems Research Laboratory Corporate Software Research and Development Motorola, Inc. email: barr@mot.com -- ========================== Professor Andrew A. Chien 1304 W. Springfield Avenue Department of Computer Science Urbana, IL 61801 University of Illinois Email: achien@cs.uiuc.edu Phone: (217)333-6844 FAX: (217)333-3501 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rick@cs.arizona.edu (Rick Schlichting) Subject: Kahaner Report: GRAPE Computer Followup-To: comp.research.japan Date: 10 Oct 1993 09:39:27 -0700 Organization: University of Arizona CS Department, Tucson AZ [Dr. David Kahaner is a numerical analyst on sabbatical to the Office of Naval Research-Asia (ONR Asia) in Tokyo from NIST. The following is the professional opinion of David Kahaner and in no way has the blessing of the US Government or any agency of it. All information is dated and of limited life time. This disclaimer should be noted on ANY attribution.] [Copies of previous reports written by Kahaner can be obtained using anonymous FTP from host cs.arizona.edu, directory japan/kahaner.reports.] From: Dr. David K. Kahaner US Office of Naval Research Asia (From outside US): 23-17, 7-chome, Roppongi, Minato-ku, Tokyo 106 Japan (From within US): Unit 45002, APO AP 96337-0007 Tel: +81 3 3401-8924, Fax: +81 3 3403-9670 Email: kahaner@cs.titech.ac.jp Re: GRAPE Computer 4 Oct 1993 This file is named "grape.93" ABSTRACT. Progress in the development of the GRAPE (GRAvity PipE) computer, developed at the University of Tokyo, for simulation of N-body systems. Newest version will have TFLOPs performance, using 2000 600MFLOP chips. Approximately two years ago I reported on efforts at the University of Tokyo to develop a special purpose parallel computer for many-body calculations arising in astrophysical simulations (see "grape.91", 22 Oct 1991). At that time a version capable of 10GFLOPs was being put together. Recently I asked the GRAPE team about their subsequent progress. 
They have been gracious enough to assemble a collection of English abstracts of current work that describes their activity much better than I could. These abstracts are reproduced below thanks to the cooperation of many scientists in this project, including Profs Makino, Sugimoto, and others. I would especially like to thank Dr. Makoto Taiji Dept of Earth Science & Astronomy College of Arts and Sciences University of Tokyo Komaba 3-8-1, Meguro-ku, Tokyo 153 Japan Tel/Fax: +81 3 3465-3925 Email: TAIJI@KYOHOU.C.U-TOKYO.AC.JP for coordinating the gathering of these abstracts. A number of these papers will appear in the Proceedings of the 27th Hawaii International Conference on System Sciences (HICSS-27), currently in press. Others appear in a special issue of the Publ Astron Soc Japan, Vol 45, 1993. Still others will appear in another special issue of PASJ in the fourth quarter of 1994 or the first quarter of 1995, based on the proceedings of a workshop held at U-Tokyo's Komaba campus. The proceedings of that workshop will appear in March 1994. The material for these abstracts was given to me in TeX/LaTeX form. I have left some TeX commands in place but removed others for readability. -------------------------------------------------------------------------- HARP chip : A 600 Mflops Application-Specific LSI for Astrophysical $N$-body Simulations Makoto Taiji, Junichiro Makino*, Eiichiro Kokubo, Toshikazu Ebisuzaki, and Daiichiro Sugimoto Department of Earth Science and Astronomy, *Department of Information Science and Graphics, College of Arts and Sciences, University of Tokyo Komaba 3-8-1, Meguro-ku, Tokyo 153, Japan Phone : 81-3-3465-3925, Fax : 81-3-3465-3925 Internet : TAIJI@KYOHOU.C.U-TOKYO.AC.JP ABSTRACT: We have developed an application-specific LSI, the HARP (Hermite AcceleratoR Pipe) chip, which will be used in GRAPE-4, a massively-parallel special-purpose computer for astrophysical $N$-body simulations. The HARP chip calculates the gravitational interaction between particles. It consists of 15 floating point arithmetic units and one unit for function evaluation. The HARP chip performs about 20 floating point operations per clock cycle and works at 30 MHz in the worst case. Therefore, the performance of the HARP chip exceeds 600 Mflops. It is made using 1.0 $\mu$m CMOS cell-based ASIC (LSI Logic, LCB007). The die size is 14.6 mm $\times$ 14.6 mm and the total gate count is 95,000. The power consumption is 5W at an operating voltage of 5V. We have verified successful operation of the sample chip up to 50 MHz. GRAPE-4 will consist of about 2000 HARP chips using multi-chip modules. The peak speed of GRAPE-4 will exceed 1 Tflops even in the worst case, and will reach around 1.8 Tflops in the typical case. ----------------------------------------------------------------------- HARP-1: A Special-Purpose Computer for $N$-body Simulation with the Hermite Integrator Eiichiro Kokubo, Junichiro Makino*, and Makoto Taiji Department of Earth Science and Astronomy, *Department of Information Science and Graphics, College of Arts and Sciences, University of Tokyo Komaba 3-8-1, Meguro-ku, Tokyo 153, Japan Phone : 81-3-3465-3925, Fax : 81-3-3465-3925 Internet : KOKUBO@KYOHOU.C.U-TOKYO.AC.JP ABSTRACT: We have designed and built HARP (Hermite Accelerator Pipeline)-1, a special-purpose computer for solving astronomical $N$-body problems with high accuracy using the Hermite integrator.
The Hermite integrator uses analytically calculated derivatives of the acceleration, in addition to the acceleration, to integrate the orbit of particles. It has a better stability and allows a longer timestep than does an Adams-Bashforth-Moulton type predictor-corrector scheme of the same error order, which has been widely used for astronomical $N$-body problems. HARP-1 is a specialized computer for accelerating this Hermite scheme. It has a 24-stage pipeline to perform the calculation of the acceleration and its time derivative, which is the most expensive part of the Hermite scheme. Its structure is quite similar to GRAPE (GRAvity PipE). The only difference is that it calculates the time derivative of the acceleration, in addition to the acceleration. The pipeline calculates one gravitational interaction at every three clock cycles. Thus, the acceleration and its time derivative of a particle are calculated in $3N + 24$ clock cycles, where $N$ is the number of particles and 24 is the pipeline latency. The peak speed of HARP-1 is 160Mflops. ------------------------------------------------------------------------- GRAPE Project: An Overview Toshikazu EBISUZAKI, Junichiro MAKINO, Toshiyuki FUKUSHIGE, Makoto TAIJI, Daiichiro SUGIMOTO, Tomoyoshi ITO and Sachiko K. OKUMURA Department of Earth Science and Astronomy and Department of Information Science and Graphics, College of Arts and Sciences, The University of Tokyo, Komaba, Meguro-ku, Tokyo 153 ABSTRACT: We are developing a series of special-purpose computers, GRAPE (GRAvity PipE), for the simulation of $N$-body systems, such as proto-planetary systems, globular clusters, galaxies, and clusters of galaxies. In simulations of $N$-body systems, almost all computing time is consumed in calculating the gravitational force between particles. GRAPE calculates the forces at high speed using hard-wired pipelines. The host computer, which is connected to GRAPE, sends the positions of particles to GRAPE. Then, GRAPE calculates the force exerted on a particle and sends this value back to the host computer. Using the force calculated by GRAPE, the host computer then integrates the orbits of the particles. We have already developed six different machines (GRAPE-1, GRAPE-1A, GRAPE-2, GRAPE-2A, GRAPE-3, and GRAPE-3A), which are divided into low- and high-accuracy types. Those machines with odd numbers (GRAPE-1, GRAPE-1A, GRAPE-3, and GRAPE-3A) are among the low-accuracy type. They were designed for simulations of collisionless systems, such as galaxies, in which only the mean potential plays an important role. Simulations of such systems do not require high accuracy in the force calculation. GRAPE-1 and GRAPE-1A are machines made by wire-wrapping. GRAPE-3 is a highly parallel system with 48 full-custom LSI chips (GRAPE chip). Each LSI chip has one GRAPE pipeline. The sustained speed of GRAPE-3 is 10 Gflops for a 200,000 particle simulation. The machines with even numbers (GRAPE-2 and GRAPE-2A) are among the high-accuracy type. They were designed for collisional systems, such as globular clusters and proto-planetary systems, in which close encounters play an important role. In simulations of such collisional systems, we must calculate the force accurately. GRAPE-2 was the first machine of the high-accuracy type. GRAPE-2A was designed for applications involving molecular-dynamics simulations, as well as gravitational $N$-body simulations. GRAPE-2A can calculate the forces of an arbitrary functional form using interpolation tables. 
The computational speed of GRAPE-2A is 180~Mflops. We are now developing a highly parallel machine, GRAPE-4, in which many GRAPE pipelines (about 1,600) will work in parallel. --------------------------------------------------------------------------- The GRAPE Software System Junichiro MAKINO and Yoko FUNATO Department of Information Science and Graphics, and Department of Earth Science and Astronomy, College of Arts and Sciences, The University of Tokyo, Meguro-ku, Tokyo 153 (Received July 21, 1992; Accepted September 21, 1992) ABSTRACT: We describe the software system used for GRAPE processors, special-purpose computers for gravitational $N$-body simulations. In gravitational $N$-body simulations, almost all of the calculation time is spent calculating the gravitational force between particles. The GRAPE hardware calculates the gravitational force between particles using hardwired pipelines with a speed in the range of 100 Mflops to 10 Gflops, depending on the model. All GRAPE hardware is connected to general-purpose workstations, on which the user program runs. In order to use the GRAPE hardware, a user program calls several library subroutines that actually control GRAPE. In this paper, we present an overview of the user interface of GRAPE software libraries and describe how they work. We also describe how the GRAPE system is used with sophisticated algorithms, such as the tree algorithm or the individual timestep algorithm. -------------------------------------------------------------------------- Highly Parallelized Special-Purpose Computer, GRAPE-3 Sachiko K. OKUMURA, Junichiro MAKINO, Toshikazu EBISUZAKI, Toshiyuki FUKUSHIGE, Tomoyoshi ITO, and Daiichiro SUGIMOTO Department of Earth Science and Astronomy, and Department of Information Science and Graphics College of Arts and Sciences The University of Tokyo, Komaba, Meguro-ku, Tokyo 153 and Eiri HASHIMOTO, Koumei TOMIDA, and Noriaki MIYAKAWA FUJI XEROX Co., LTD, Naka, Atsugi, Kanagawa 243 (Received July 15, 1992; accepted September 21, 1992) ABSTRACT: We have developed a highly parallelized special-purpose computer, GRAPE (GRAvity PipE)-3, for gravitational many-body simulations. Its peak computing speed is equivalent to 15 Gflops. The GRAPE-3 system comprises two identical boards connected to a host computer (workstation) through the VME bus. Each board has 24 custom LSI chips (GRAPE chips) which calculate gravitational forces in parallel. The calculation of the gravitational forces is easily parallelized, since the forces on different particles can be calculated independently. One GRAPE chip running at a 10 MHz clock has a computing speed equivalent to 0.3 Gflops; the GRAPE-3 system with 48 GRAPE chips thus achieves a peak speed of 15 Gflops. The sustained speed of the GRAPE-3 system reached 10 Gflops-equivalent. ---------------------------------------------------------------------------- A Special-Purpose Computer for N-body Simulations: GRAPE-2A Tomoyoshi ITO, Junichiro MAKINO, Toshiyuki FUKUSHIGE, Toshikazu EBISUZAKI, Sachiko K. OKUMURA and Daiichiro SUGIMOTO Department of Earth Science and Astronomy, and Department of Information Science and Graphics, College of Arts and Sciences University of Tokyo, Komaba, Meguro-ku, Tokyo 153 ABSTRACT: We have developed GRAPE-2A, which is a back-end processor to accelerate the simulations of gravitational N-body systems, such as stellar clusters, proto-planetary systems, and the structure formation of the universe.
GRAPE-2A calculates forces exerted on one particle from other particles. The host computer, which is connected to GRAPE-2A through the VME bus, performs other calculations such as the time integration. In the simulation of gravitational N-body systems, almost all computing time is consumed in the calculation of force between particles. GRAPE-2A performs this force calculation with a speed much faster than that of a general-purpose computer. GRAPE-2A can be used for the cosmological N-body simulation with periodic boundary conditions using the Ewald method, and for the molecular dynamics simulations of proteins and crystals. The computational speed of GRAPE-2A is 180 Mflops. ----------------------------------------------------------------------- HARP: A Special-Purpose Computer for $N$-body Problem Junichiro MAKINO, Eiichiro KOKUBO, and Makoto TAIJI (as above) (Received Aug. 21, 1992; Accepted Nov. 16, 1992) ABSTRACT: We present the concept of HARP (Hermite AcceleratoR Pipeline), a special-purpose computer for solving the $N$-body problem using the Hermite integrator. A Hermite integrator uses analytically calculated derivatives of the acceleration, in addition to the acceleration, to integrate the orbit of a particle. It has better stability and allows a longer timestep than does an Adams-Bashforth-Moulton type predictor-corrector scheme of the same error order. HARP is a specialized computer for this type of Hermite scheme. Its structure is quite similar to GRAPE; the only difference is that it calculates the time derivative of the acceleration, in addition to the acceleration. We are now developing HARP-1, the first prototype machine with a peak speed of 240 Mflops. The massively parallel GRAPE-4 will be based on this HARP architecture. Its speed will be about 1 Tflops. -------------------------------------------------------------------------- WINE-1: Special-Purpose Computer for $N$-Body Simulations with a Periodic Boundary Condition Toshiyuki FUKUSHIGE, Junichiro MAKINO, Tomoyoshi ITO, Sachiko K. OKUMURA, Toshikazu EBISUZAKI and Daiichiro SUGIMOTO (as above) (Received 1992 July 31; accepted 1992 October 6) ABSTRACT: We have developed WINE-1 (Wave space INtegrator for Ewald method), a special-purpose computer for $N$-body simulations with a periodic boundary condition. In $N$-body simulations with a periodic boundary condition, such as cosmological $N$-body simulations, we use the Ewald method to calculate the gravitational interaction. With the Ewald method, we can calculate the interaction more accurately than a calculation with other methods, such as the PM method, the P$^3$M method, or the tree algorithm. In the Ewald method, the total force exerted on a particle is divided into contributions from real space and wave-number space so that the infinite sum can converge exponentially in both spaces. WINE is a special-purpose computer used to calculate the interaction in wave-number space. WINE is connected to a host computer via the VME bus. We have developed the first machine, WINE-1. It is a single board, 38 cm by 40 cm in size, on which 31 LSI chips and 46 IC chips are wire-wrapped. The peak speed of WINE-1 is equivalent to 480 Mflops. The summation in real space is calculated using a GRAPE system, another special-purpose computer for the direct calculation of the interparticle force. For example, we can perform a cosmological $N$-body simulation for $N$=80,000 (500 steps) within a week if we use GRAPE-2A for the summation in real space and WINE-1 for that in wave-number space.
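For readers unfamiliar with the Ewald split that WINE implements, the idea can be written schematically as follows (this is the standard textbook form, with softening, self-interaction and constant terms omitted; the notation is mine, not taken from the abstracts above). The periodic $1/r$ potential at position $\mathbf{r}$ in a box of side $L$ is divided as

$$ \Phi(\mathbf{r}) \;=\; \sum_{j}\sum_{\mathbf{n}} \frac{m_j\,\mathrm{erfc}\!\left(\alpha\,|\mathbf{r}-\mathbf{r}_j+\mathbf{n}L|\right)}{|\mathbf{r}-\mathbf{r}_j+\mathbf{n}L|} \;+\; \frac{4\pi}{L^{3}}\sum_{\mathbf{k}\neq 0}\sum_{j} \frac{m_j\,e^{-k^{2}/4\alpha^{2}}}{k^{2}} \cos\!\bigl(\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}_j)\bigr). $$

The complementary error function makes the first (real-space) sum converge rapidly, while the Gaussian factor does the same for the second (wave-number) sum; the splitting parameter $\alpha$ controls how the work is shared between GRAPE (real space) and WINE (wave space).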
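More generally, the quantity that every machine in the GRAPE family evaluates in hardware, and that HARP additionally differentiates in time, is the softened pairwise gravitational sum. As a rough illustration only (the Plummer softening eps and the convention G = 1 are my assumptions, and this code is not taken from the GRAPE libraries), the host-side equivalent of one pipeline pass looks like this in C:

    /* Direct-summation acceleration and its time derivative ("jerk"),
     * i.e. the inner loop that the GRAPE/HARP pipelines replace.
     * Cost is O(N^2) per step, which is why it dominates N-body runs.
     * Plummer softening eps; units chosen so that G = 1 (assumptions). */
    #include <math.h>

    void accel_and_jerk(int n, const double m[],
                        const double x[][3], const double v[][3],
                        double a[][3], double adot[][3], double eps)
    {
        double eps2 = eps * eps;

        for (int i = 0; i < n; i++) {
            for (int k = 0; k < 3; k++) { a[i][k] = 0.0; adot[i][k] = 0.0; }

            for (int j = 0; j < n; j++) {
                if (j == i) continue;

                double dx[3], dv[3];
                for (int k = 0; k < 3; k++) {
                    dx[k] = x[j][k] - x[i][k];
                    dv[k] = v[j][k] - v[i][k];
                }
                double r2    = dx[0]*dx[0] + dx[1]*dx[1] + dx[2]*dx[2] + eps2;
                double rv    = dx[0]*dv[0] + dx[1]*dv[1] + dx[2]*dv[2];
                double rinv2 = 1.0 / r2;
                double rinv3 = rinv2 / sqrt(r2);

                for (int k = 0; k < 3; k++) {
                    a[i][k]    += m[j] * dx[k] * rinv3;
                    /* d/dt of the term above: v/r^3 - 3 (r.v) r / r^5 */
                    adot[i][k] += m[j] * (dv[k] - 3.0 * rv * rinv2 * dx[k]) * rinv3;
                }
            }
        }
    }

On GRAPE the host would instead ship particle positions (and, for HARP, velocities) to the board and read the accumulated sums back, but the arithmetic per pair is the same.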
-------------------------------------------------------------------------- Publ. Astron. Soc. Japan 45, 377-392 and Plate 2 (1993) DREAM-1 : Special-Purpose Computer for Computational Fluid Dynamics Yousuke Ohno, Junichiro Makino,* Izumi Hachisu, Munetaka Ueno, Toshikazu Ebisuzaki, Daiichiro Sugimoto, and Sachiko K. Okumura^1 Department of Earth Science and Astronomy, *Department of Information Science and Graphics, College of Arts and Sciences, The University of Tokyo, Meguro-ku, Tokyo 153 and Yoshihiro Chikada^2 Nobeyama Radio Observatory, Minamimaki-mura, Minamisaku-gun, Nagano 384-13 (Received 1992 August 10; accepted 1992 October 13) 1 Present Address: Nobeyama Radio Observatory, Minamimaki-mura, Minamisaku-gun, Nagano 384-13. 2 Present Address: Nobeyama Astronomical Observatory, Mitaka, Tokyo 181. ABSTRACT: Based on the concept of DREAM (Disk REsource Array Machine), we have developed a special-purpose computer for computational fluid dynamics, i.e., the first model of DREAM, DREAM-1. In computer simulations of fluid dynamics, the number of grid points determines both the resolution and accuracy. On conventional computers, the number of grid points is limited by the size of the main memory. The essence of the DREAM concept is to use magnetic disk units as the main memory instead of silicon chips. Thus, we are able to use a main memory that is 100-times larger than that of silicon chips for the same cost. DREAM solves two difficulties of using magnetic disk units as the main memory. First, the data-transfer speed of a disk unit is slower than that of the silicon memory chips. We solve this difficulty by accessing many disk units in parallel and using a small cache. Second is the long access time of the disk unit. In finite-difference calculations, we access the data only in long vectors. The access time therefore becomes negligible. DREAM-1 has one vector processing unit (VPU) and one hard-disk unit. The peak speed of VPU is 12 Mflops. The capacity and the data-transfer rate of the hard-disk unit are 400 Mbyte and 2 Mbyte s$^{-1}$, respectively. We implemented one- and two-dimensional fluid dynamics codes on DREAM-1. The sustained computing speed was 4 Mflops for 1-D and 2.7 Mflops for 2-D. The speed of 4 Mflops is the naked VPU speed while the slow down to 2.7 Mflops of the 2-D simulations is due mainly to the overhead of disk access. We plan to increase the peak speed of the VPU to 20 Mflops and to build 4 units. These units are connected in a one-dimensional ring network. This parallel DREAM system will have a peak speed of 80 Mflops and will be able to perform a $(300)^3$ 3-D fluid calculation in 4--12 days. We also discuss the potential ability of DREAM: for example, the DREAM system can solve an I/O neck of such observational instruments as a CCD-camera or a VLBI. --------------------------------------------------------------------------- DREAM-1A : Special-Purpose Computer for Computational Fluid Dynamics Yousuke Ohno, Junichiro Makino*, Izumi Hachisu, Toshikazu Ebisuzaki, Daiichiro Sugimoto Department of Earth Science and Astronomy, *Department of Information Science and Graphics, College of Arts and Sciences, University of Tokyo, Meguro-ku Tokyo 153, Japan Email: OHNO@KYOHOU.C.U-TOKYO.AC.JP ABSTRACT: We have developed a special-purpose computer for computational fluid dynamics, DREAM-1A. DREAM-1A has a peak speed of 80 Mflops and a memory size of 1.6 Gbyte. DREAM-1A consists of four units connected in a one-dimensional bidirectional ring network. 
One unit of DREAM-1A has one vector processing unit (VPU) and one hard-disk unit. The physical variables are stored in hard disk, instead of RAM. The peak speed of a VPU is 20 Mflops. The capacity and the data-transfer rate of the hard-disk unit are 400 Mbyte and 1 Mbyte/s. We implemented one- and two-dimensional fluid dynamics codes on one unit of DREAM-1A. The sustained computing speed of this unit was 6.4 Mflops for 1-D and 3.6 Mflops for 2-D. In 1-D calculation, all variables are stored in small RAM in VPU, while in 2-D calculation, data are stored in hard disk. Thus the speed of 6.4 Mflops represent the raw performance of VPU, while the slow down to 3.6 Mflops of the 2-D simulations represent the performance of the total system. The speed of data transfer between processor units is 40 Mbyte/s. We are implementing 2-D and 3-D fluid dynamics codes on the parallel DREAM system. DREAM-1A will have a sustained computing speed of 14 Mflops and will be able to perform a $(256)^3$ 3-D fluid calculation in 3--8 days. ------------------------------------------------------------------------- The evolution of massive black hole binaries in merging galaxies I. Evolution of binary in spherical galaxy Junichiro Makino, Toshiyuki Fukushige, Sachiko K. Okumura, and Toshikazu Ebisuzaki (as above) ABSTRACT: We investigate the evolution of the binary massive black hole formed by merging of galaxies containing central black holes. When two galaxies merge, their central black holes sinks towards the center of the merger because of the dynamical friction and form a binary system. This black hole binary becomes harder because of dynamical friction from field stars. At the same time, field stars gain kinetic energy and the core of the galaxy expands. When the periastron distance between black holes has become small enough, the emission of the gravitational wave would cause black holes to merge. In this paper, we studied the timescale of the merging and the amount of the energy deposited to the core, by means of direct $N$-body simulation with 16384+2 particles. We found that the black hole binary tends to have large eccentricity ($\gtorder 0.9$). This is because the evolution is driven by the dynamical friction from the field stars. The dynamical friction is strongest at the apocenter, since its strength is inversely proportional to the third power of the velocity. The binding energy of the binary per unit mass becomes $\sim 10$ times as large as the kinetic energy of field particles in the crossing timescale of the core. For a typical elliptical galaxy with the internal velocity dispersion of $300 {\rm km~s}^{-1}$, the velocity of the binary at the periastron would easily reach $3000 {\rm km~s}^{-1}$, for which timescale of merging by the radiation of the gravitational wave is $\ltorder 10^9$ years. Most of black hole binaries formed by merging of ellipticals would merge in a time much shorter than the Hubble time. ------------------------------END OF REPORT---------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: gross@noether.ucsc.edu (Mike Gross) Newsgroups: comp.parallel,sci.math.num-analysis Subject: Parallel Fourier transforms Organization: University of California, Santa Cruz I need to solve Poisson's equation on an i860 supercomputer with local memory only. 
I would like to use Fourier transform methods to solve the equation, but it is not obvious to me how to perform a global operation such as a Fourier integral (or FFT) efficiently on data that must be fragmented across several processors. In order to get the dynamic range I need in my simulation, I require a space-domain mesh that is several times the size of local memory. Does anyone out there know of any good references for this problem? Or better yet, are there any publicly available routines? My problems sounds like one that has been attacked many times by parallel numerical analysts. I hope this isn't a FAQ. If you post a response, please e-mail a copy to me. Many thanks. Mike Gross Physics Board and Lick Observator Univ of California GO SLUGS!!!!! Santa Cruz, CA 95064 gross@lick.ucsc.edu (408) 459-4588 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: "Bruce Shriver" Newsgroups: comp.parallel,comp.sys.super,comp.databases Subject: CHISQ Ltd. Organization: PSI Public Usenet Link It's my understanding that CHISQ Ltd. of the UK markets the following two products for MasPar and DECmpp systems 1. TABLE-MAKER 2. SAS/TM ACCESS TABLE-MAKER provides large-scale statistical analysis capabilities such as data correlation, averaging, and histogramming. It summarizes and extracts subsets from databases. SAS/TM ACCESS integrates TABLE-MAKER into the SAS statistical package. I'm interested in learning if these products are available on any other MPP (SIMD or MIMD) systems or on any clustered workstation systems. Any comments about experiences using TABLE-MAKER and SAS/TM ACCESS (ease of use designing, implementing, debugging applications, performance, etc.) would also be appreciated. Thanks, Bruce b.shriver@computer.org Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Newsgroups: comp.parallel,comp.benchmarks,comp.sys.super,eunet.esprit,uk.announce From: conf@dcs.warwick.ac.uk (PEPS Conference Administrator) Subject: PEPS'93 Workshop Organization: Department of Computer Science, Warwick University, England PERFORMANCE EVALUATION OF PARALLEL SYSTEMS Programme and Registration Details 29 - 30 November 1993 University of Warwick, Coventry, UK Sponsored by: The Commission of the European Communities Esprit Programme The British Computer Society FSG, PPSG The Technical Cooperation Program (TTCP) Workshop Overview ----------------- This is the first in a series of annual International Workshops bringing together state of the art research and experience in the Performance Evaluation of Parallel Systems. Topics include: Parallel Systems Characterisation Modelling Monitoring Benchmarking Participation is welcome from users, vendors and researchers in any of the above areas. Programme --------- Monday 29 November 9:15 Welcome: Vice Chancellor (University of Warwick) 9:30 Invited talk: Horst Forster (Head, CEC, HPCN Programme) 10:15 Coffee Break 10:45 Benchmarking 1 Chair: Ed Brocklehurst (National Physical Lab.) 
Innovations in Benchmarking: T Chambers SOSIP and the PEPS Benchmarking Methodology: T Mansfield The RAPS Initiative: K Solchenbach The GENESIS Benchmark Suite: Current State and Results: VS Getov, AJG Hey, RW Hockney, IC Wolton 12:30 Lunch (Poster Session) 14:00 Characterisation Chair: Graham Nudd (University of Warwick) A Layered Approach to Modelling Parallel Systems for Performance Prediction: E Papaefstathiou Performance Evaluation of Parallel Computers Using Algorithmic Structures: E Onbasioglu and Y Paker A Petri Net Technique for Assessing Performance of Multiprocessor Architectures: HD Johnson, J Delgado-Frias, S Vassiliadis and DM Green Multiprocessor Benchmarking Using Abstract Workload Descriptions: E Bartscht and J Brehm 15:30 16:00 Performance Analysis Chair: Salvo Sabina (Intecs) Design Issues in Performance Monitoring of Parallel Systems - The PEPS Approach: MR Nazzarelli and S Sabina An Analytic Model for a Parallel Computer - Prediction of a Shared Block Behavior: K Joe and A Fukuda Efficiency of Parallel Programs in Multi-Tasking Environments: T Schnekenburger 17:00 Vendors Session and Exhibition Chair: Chris Lazou (HiPerCom) 19:00 Dinner 20:30 Panel Discussion Chair: Anne de Baas (CEC, HPCN Programme) Tuesday 30 November 9.00 Invited talk: Robert Hiromoto (University of Texas) 9.30 Performance Modelling and Tools Chair: Jean-Marc Talbot (Simulog) The PEPS Modelling Tools: F Manchon Performance Evaluation of Parallel Programs on the Data Diffusion Machine: P Stallard, H Muller and D Warren PRM Net Performance Modelling during Design and Implementation: MEC Hull and PG O'Donoghue Modelisation of Communication in Parallel Machines within the ALPES Project: C Tron and B Plateau 11:00 Coffee Break 11:30 Applications Chair: Trevor Chambers (National Physical Lab.) Programmer Effort in the Development of Parallel Legendre Transforms: D F Snelling The Use of Serialization Approach in the Design of Parallel Programs Exemplified by a Problem in Applied Celestial Mechanics: VV Savchenko and VO Vishnjakov A New Technique to Improve Parallel Automated Single Layer Wire Routing: H Keshk, S Mori, H Nakashima and S Tomita Seismic Applications on Parallel MIMD Machines - Lessons from Experience: S Kapotas and W Karpoff 12:30 Lunch (Poster Session) 14:00 Performance Application Chair: Tony Hey (Southampton University) The MEASURE Image Synthesis Benchmark: A Biersack and R Hodicke Benchmarking for Embedded Control and Real-Time Applications: H Lindmeier, D Rauh, M Ronschkowiak Towards a Benchmark for Scalable Parallel Database Machines: J Kerridge, I Jelly, C Bates and Y Tsitogiannis 15.00 Coffee Break 15.30 Benchmarking 2 Chair: Aad van der Steen (ACC, Utrecht) A Performance Assessment Support System for Advanced Software Systems: F Kwakkel and ML Kersten The Genesis Benchmarking Methodology: R Hockney Benchmarking Experience on the KSR1: VS Pillet, A Clo and B Thomas Registration Details -------------------- The registration fee is #270 (Pounds Sterling). As attendance is limited to 100 delegates, an early response is recommended. Please include payment with your registration form. Payment should be in UK Pounds Sterling by cheque or international money order, drawn on a UK bank. Registration cannot be confirmed without payment. Registration at the workshop will be in Radcliffe House on the evening of Sunday 28th November, when a buffet supper will be available. Accommodation is in single rooms with en suite facilities in Radcliffe House.
Accommodation and meals are included in the registration fee. The technical sessions will be held in Scarman House. Travel to the University of Warwick ----------------------------------- There are frequent train services between London and Coventry (approximately at 30 minute intervals) and the travel time from London Euston station is about 75 minutes. The University is 3 miles from Coventry train station. There are usually plenty of taxis available. Birmingham International Airport has good connecting train services to Coventry (a journey of about 10 miles). By car From the North M1, M69 follow the By-pass routes marked Warwick (A46), then follow the signs to the University, which is on the outskirts of COVENTRY. From the South M1, M45, A45 or M40, A46, follow the signs to the University. From the East join the M1, then follow directions as for travel from the North or the South. From the West M5, M42, A45, follow the signs for the University. Name (and title): ------------------------------------------------------------------------------- Affiliation: ------------------------------------------------------------------------------- Position: ------------------------------------------------------------------------------- Address: ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- Telephone: Fax: ---------------------------------- --------------------------------------- Email: ------------------------------------------------------------------------------- Please send registration forms and payment to: PEPS Workshop Department of Computer Science University of Warwick Coventry CV4 7AL UK Tel: +44 203 523193 Fax: +44 203 525714 Email: conf@dcs.warwick.ac.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cheekong@iss.nus.sg (Chui Chee Kong) Subject: iterative linear system solver Date: 11 Oct 1993 09:39:06 GMT Organization: Institute Of Systems Science, NUS Will appreciate if someone can send me references on solving large size linear system of equations on preferably distributed system. Thanks. Just an anonymous engineer, chee kong internet: cheekong@iss.nus.sg Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: montoya@ciril.fr (Frederic Montoya,CIRIL,,) Subject: HPF Compilers... Date: 11 Oct 1993 10:39:23 +0100 Organization: CIRIL, Nancy, France Hi, which vendors are actually proposing a full features HPF compiler ? Is there any public domain translator (eg HPF -> F77 + PVM) ? Thanx in advance, Frederic. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mbergerm+@CS.CMU.EDU (Marcel Bergerman) Subject: Help with parallel computing references Message-ID: Dear netters, I would like to know of references on the better allocation of processors for large, numerical-intensive parallel applications. The same sequential algorithm can have several parallel implementations and it would be good to have some criteria to decide which one is the best before even beginning to program. Send all answers to me and I'll summarize them to the group. Thank you, --Marcel p.s.: Actually, I am posting this for a friend who does not have access to this bboard. Sorry if this is "bogus" or a FAQ. 
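One rough, back-of-the-envelope criterion that is often used for questions like Marcel's (this is a generic first-cut model of my own choosing, not a pointer to a specific reference) is to estimate, for each candidate decomposition on $p$ processors,

$$ T_p \;\approx\; f\,T_1 \;+\; \frac{(1-f)\,T_1}{p} \;+\; n_{\mathrm{msg}}(p)\,t_{\mathrm{lat}} \;+\; \frac{V_{\mathrm{comm}}(p)}{\beta}, \qquad S(p) = \frac{T_1}{T_p}, \quad E(p) = \frac{S(p)}{p}, $$

where $T_1$ is the sequential run time, $f$ the fraction of the work that stays serial, $n_{\mathrm{msg}}(p)$ and $V_{\mathrm{comm}}(p)$ the message count and communicated data volume of that decomposition, $t_{\mathrm{lat}}$ the per-message latency and $\beta$ the network bandwidth. Comparing the predicted $T_p$ or efficiency $E(p)$ of the candidate implementations before writing any code is exactly the kind of criterion being asked for; the Amdahl term $f\,T_1$ alone already rules out many designs.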
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 11 Oct 93 15:57:55 +0100 From: Dave Beckett Subject: [LONG] Update to transputer,occam and parallel computing archive Summary: More than a gigabyte transferred! Organization: Computing Lab, University of Kent at Canterbury, UK. Keywords: transputer, occam, parallel, archive In the last two and a half weeks I've added another 3M files to the archive at unix.hensa.ac.uk in /parallel. It currently contains over 49 Mbytes of freely distributable software and documents, in the transputer, occam and parallel computing subject area. That's about 3 Mbytes more than last time. The (old) /parallel/Changes file is reproduced below. Statistics: Over 2380 users (400 more than last time), over 1200 Mbytes transfered (260MB more) since the archive was started in early May. Top 10 files accessed, excluding Index files 561 /parallel/README 280 /parallel/pictures/T9000-schematic.ps.Z 273 /parallel/reports/misc/soft-env-net-report.ps.Z 221 /parallel/documents/inmos/occam/manual3.ps.Z 170 /parallel/Changes 155 /parallel/reports/ukc/T9000-systems-workshop/all-docs.tar.Z 151 /parallel/software/folding-editors/origami.tar.Z 122 /parallel/books/prentice-hall 106 /parallel/ls-lR.Z 96 /parallel/software/folding-editors/fue-ukc.tar.Z Again, looks mostly the same as last time. The material can be found at the HENSA (Higher Education National Software Archive) UNIX archive site. The HENSA UNIX archive is accessible via an interactive browsing facility, called fbr as well as email, DARPA ftp, gopher and NI-FTP (Blue Book) services. For details, see below. The files are all located in /parallel and each directory contains a short Index file of the contents. If you want to check what has changed in between these postings, look at the /parallel/Changes file which contains the new files added. NEW FEATURES ~~~~~~~~~~~~ Gopher access. See below. NEW FILES since 23th September 1993 (newest first) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /parallel/conferences/share-HPC-day Details of the talks taking place during the High Performance Computing Day being held as part of the SHARE Europe Anniversary meeting on 27th October 1993 at The Hague, The Netherlands. /parallel/documents/vendors/APR/parallelizing-products Details of APR parallelizing products (FORGE 90, xHPF77) /parallel/journals/lisp-and-symbolic-computation Call for papers and information for Lisp and Symbolic Computation journal published by Kluwer. /parallel/reports/dec-crl/contention-in-shared-mem-algorithms Announcement of new CRL Technical Report 93/12 on Contention in Shared Memory Algorithms by Cynthia Dwork, Maurice Herlihy and Orli Waarts. /parallel/faqs/PVM-technical PVM Version 3 Frequently Asked Questions - Technical from comp.parallel.pvm newsgroup by Bob Manchek /parallel/conferences/parle94 Call for papers for Parallel Architectures and Languages Europe (PARLE) 1994 being held from 13th-17th June 1994 at the Intercontinental Hotel, Athens, Greece. Deadlines: Full Draft: 19th November 1993; Acceptance: 11th February 1994; Camera-ready copies: 18th March 1994. /parallel/faqs/parallel-data-compression Summary of responses to a query about parallel data compression by Alf-Christian Achilles . /parallel/conferences/ispcs93 Details of the 1993 International Workshop on Intelligent Signal Processing and Communication Systems (ISPCS '93) being held from 27th-29th October 1993 at Aoba Memorial Hall, Tohoku University, Sendai, Japan. 
/parallel/user-groups/isug/isug_newsletter.09.93 Intel Supercomputer Users Group newsletter for September 1993 from export.ssd.intel.com in /pub/isug. /parallel/user-groups/isug/isug_newsletter.09.93.announce Announcement of the above newsletter /parallel/software/tmsc40/tick.txt TMS320C40 parallel network detection and loader utility v2.0 (September 1993) supporting Transtech, Hunt, and Traquair boards hosted by DOS, SunOS and Linux by Ben Abbott and Akos Ledeczi of Vanderbilt University, USA with Modifications for UNIX by Mark Milligan University of British Columbia and Rolf Skeie of NDRE. /parallel/software/tmsc40/tick.zip PKZIP archive of TICK v2.0 for x86 computers running MSDOS or Linux, and Suns running SunOS. /parallel/conferences/compsac93 Details (Advance Programme) of IEEE Computer Society's Seventeenth Annual International Computer Software & Applications Conference - COMPSAC93 being held from 1st-5th November 1993 at the Mesa Hilton Pavilion, Mesa (Phoenix suburb), Arizona, USA. /parallel/conferences/conpar-vappVI-94 Call for papers for the 1994 International Conferences on Parallel Processing (CONPAR) / Vector and Parallel Processors in Computational Science meeting VI (VAPP VI) being held from 6th-8th September 1994 at the Johannes Kepler University of Linz, Austria. Deadlines: Final call for papers: October 1993; Submission of complete papers: 15th February 1994; Acceptance: 1st May 1994; Final paper: 1st July 1994. /parallel/documents/vendors/cray/cray-t3d.announcement Announcment of the CRAY T3D by Cray Research, Inc. /parallel/bibliographies/icpp93-bib.ref Draft bibliography of 1993 International Conference on Parallel Processing (ICPP 93) by Eugene Miya in refer format. /parallel/reports/misc/status-of-parallel-processing-education.announcement How to get hold of the current version of "The Status of Parallel Processing Education: 1993" from University of New York at Buffalo by Russ Miller /parallel/conferences/par-finite-element-computations Call for participation in the symposium: Parallel Finite Element Computations being held from 24th-27th October 1993 at the Supercomputer Institute, 1200 Washington Avenue South, Minneapolis, Minnesota, USA sponsored by the University of Minnesota Supercomputer Institute and US Army High Performance Computing Research Center. /parallel/documents/misc/PPE_Survey.announcement /parallel/documents/misc/PPE_Survey.txt The Parallel Programming Evaluation Survey -- Doug Elias of the Software and Consulting Support Group, Cornell Theory Center, Ithaca, NY, USA is taking a survey questionnaire on tools used to obtain parallelism in applications. The survey is PPE_Survey.txt and his comments are PPE_Survey.announcement. /parallel/conferences/micro-26 Call for participation for the 26th Annual ACM/IEEE International Symposium on Microarchitecture with special emphasis on Instruction-Level Parallel Processing being held at The Austin Marriott at the Capitol, Austin, Texas, USA from 1st-3rd December 1993. Sponsored by ACM SIGMICRO and IEEE TC-MICRO. /parallel/faqs/mesh-of-buses Summary of responses to a query about works on Meshes of (optical) Buses by ricki R. Wegner /parallel/conferences/ess93 Call for participation in the European Simulation Symposium 1993 with the following themes: Dynamic Modelling and Information Systems; Multimedia Systems and Virtual Reality; High-performance Computing; Simulation and New Trends in Methods and Tools. 
THe symposium is being held from 25th-28th October 1993 at the Faculty of Mechanical Engineering, Delft University of Technology, The Netherlands. /parallel/courses/csp Introductory CSP (Communicating Sequential Processes) - A Tool-Based Course being given by Formal Systems (Europe), Ltd from 29th November -3rd December 1993 in Boston, MA, USA. /parallel/conferences/rovpia94 Details and call for papers for International Conference on Robotics, Vision and Parallel Processing for Industrial Automation (ROVPIA'94) being held from 26th-28th May 1994 at the School of Electrical and Electronic Engineering, Perak Campus of University of Science, Malaysia. Deadlines: Abstract: 30th Oct, Acceptance: 30th Nov, Camera-ready copies: 30th Dec. /parallel/faqs/parallel-garbage-collection A summary of responses to a query about parallel garbage collection algorithms by Gritton Gregory Vance /parallel/documents/vendors/maspar/MasPar-Challenge Details of the MasPar Challenge for 1993. /parallel/journals/IEEE-computer-assoc-processing Call for papers and referees for November 1994 Special Issue of IEEE COMPUTER on Associative Processing and Processors. Deadlines: Abstract: 1st October; Draft Paper: 1st December. /parallel/bibliographies/archindependent.bib /parallel/bibliographies/archindependent.txt BibTeX bibliography by David Skillicorn on aspects of parallelism relating to: architecture independence, programming models, categorical approaches to parallelism, general purpose parallelism, and theoretical and foundational results relating to these. OTHER HIGHLIGHTS ~~~~~~~~~~~~~~~~ /parallel/documents/occam/manual3.ps.Z The latest draft (March 31 1992) of the occam 3 reference manual by Geoff Barrett of INMOS. This is freely distributable but is copyrighted by INMOS and is a full 203 page book in the same style of the Prentice Hall occam 2 reference manual. Thanks a lot to Geoff and INMOS for releasing this. /parallel/journals/Wiley/trcom/* LaTeX (.sty) and BibTeX (.bst) style files and examples of use for the forthcoming Wiley journal - Transputer Communications, organised by the World occam and Transputer User Group (WoTUG). See /parallel/documents/journals/transputer-communications.cfp for details on how to submit a paper. /parallel/software/folding-editors/origami.zip /parallel/software/folding-editors/origami.tar.Z An updated version of the origami folding editor distribution as improved by Johan Sunter of Twente, Netherlands. The PKZIP 2.0 compatible origami.zip archive contains all the files needed for running the editor on MSDOS, but no sources and the origami.tar.Z file contains all the sources and keymaps as well as binaries for SPARC architectures and for MSDOS systems. /parallel/reports/wotug/T9000-systems-workshop/* The reports from the T9000 Systems Workshop held at the University of Kent at Canterbury in October 1992. It contains ASCII versions of the slides given then with the permission of the speakers from INMOS. Thanks to Peter Thompson and Roger Shepherd for this. Subjects explained include the communications architecture and low-level communications, the processor pipeline and grouper, the memory system and how errors are handled. /parallel/papers/ukc/peter-welch/* Eleven papers by Professor Peter Welch and others of the Parallel Processing Group at the Computing Laboratory, University of Kent at Canterbury, England related to occam, the Transputer and other things. 
Peter is Chairman of the World occam and Transputer User Group (WoTUG) /parallel/software/inmos/iservers/* Many versions of the iserver- the normal version, one for Windows (WIserver), one for etherneted PCs (PCServer) and one for Meiko hardware. /parallel/software/vcr/*,../parmacs/* Software from University of Southampton: VCR - an occam compiler with virtual channel support [requires INMOS occam toolset] plus a version of the parallel macros for FORTRAN that runs over it (using a port of F2C which is supplied) [requires INMOS C toolset]. /parallel/software/folding-editors/* Lots of different versions of folding editors including origami and folding micro-emacs, traditionally used for occam programming environments. /parallel/parlib Mirror of the PARLIB archive maintained by Steve Stevenson, the moderator of the USENET group comp.parallel. Also available: /pub/misc/ukc.reports The internal reports of the University of Kent at Canterbury Computing Laboratory. Many of these contain parallel computing research. /netlib/p4, /netlib/pvm, /netlib/pvm3, /netlib/picl, /netlib/paragraph, /netlib/maspar Mirror of the netlib files for the above packages. Coming Soon ~~~~~~~~~~~ A better formatted bibliograpy of the IOS press (WoTUG, NATUG et al) books. A HUGE bibliography of occam papers, PhD theses and publications - currently about 2000 entries. The rest of the INMOS archive server files. WoTUG related papers and information. NATUG information and membership form. A transputer book. A freely distributable occam compiler for workstations. There are several ways to access the files which are described below - log in to the archive to browse files and retrieve them by email; transfer files by DARPA FTP over JIPS or use Blue Book NI-FTP. Logging in: ~~~~~~~~~~~ JANET X.25 network: call uk.ac.hensa.unix (or 000049200900 if you do not have NRS) JIPS: telnet unix.hensa.ac.uk (or 129.12.21.7) Once connected, use the login name 'archive' and your email address to enter. You will then be placed inside the fbr restricted shell. Use the help command for up to date details of what commands are available. Transferring files by FTP ~~~~~~~~~~~~~~~~~~~~~~~~ DARPA ftp from JIPS/the internet: site: unix.hensa.ac.uk (or 129.12.21.7) login: anonymous password: Use the 'get' command to transfer a file from the remote machine to the local one. When transferring a binary file it is important to give the command 'binary' before initiating the transfer. For more details of the 'ftp' command, see the manual page by typing 'man ftp'. The NI-FTP (Blue Book) request over JANET path-of-file from uk.ac.hensa.unix Username: guest Password: The program to do an NI-FTP transfer varies from site to site but is usually called hhcp or fcp. Ask your local experts for information. Transferring files by Email ~~~~~~~~~~~~~~~~~~~~~~~~~~ To obtain a specific file email a message to archive@unix.hensa.ac.uk containing the single line send path-of-file or 'help' for more information. Browsing and transferring by gopher ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >From the Root Minnesota Gopher gopher, select the following entries: 8. Other Gopher and Information Servers/ 5. Europe/ 37. United Kingdom/ 14. HENSA unix (National software archive, University of Kent), (UK)/ 3. The UNIX HENSA Archive at the University of Kent at Canterbury/ 9. PARALLEL - Parallel Computing / and browse the archive as normal. 
[The numbers are very likely to change] The short descriptions are abbreviated to fit on an 80 column display but the long ones can always be found under 'General Information.' (the Index files). Updates to the gopher tree follow a little behind the regular updates. DONATIONS ~~~~~~~~~ Donations are very welcome. We do not allow uploading of files directly but if you have something you want to donate, please contact me. Dave Beckett Computing Laboratory, University of Kent at Canterbury, UK, CT2 7NF Tel: [+44] (0)227 764000 x7691 Fax: [+44] (0)227 762811 Email: djb1@ukc.ac.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ganesan@cs.buffalo.edu (Ravikanth Ganesan) Subject: Scheduling in Encore Multimax This is similar to my previous post about scheduling on hypercube. Assume I have a configuration of 4 processors on an Encore Multimax machine. How are newly created jobs assigned to a processor ? Again the question of whether there is a global queue or a local queue for each processor. Once a job is assigned to a processor do all the processes created by the same job get assigned to run on the same processor ? What if the job is thread-based ? Do all the threads of the same job get to run on the same processor or different processors ? What criteria are adopted ? Please provide references if you can. Any leads is welcome. thanks ravi -- + RAVIKANTH GANESAN + + [OFF] 334 Bell Hall, CS Dept. [E-Mail] ganesan@cs.buffalo.edu + + SUNY@Buffalo, NY 14260 + (716)645-2193 [Res] (716)834-3171 + Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: misc.jobs.offered,comp.parallel,comp.os.mach From: lernst@SSD.intel.com (Linda Ernst) Subject: OS Designers, Intel, Beaverton, Oregon, USA Organization: Supercomputer Systems Division (SSD), Intel The Supercomputing Systems Division of Intel, has positions available now in Beaverton, Oregon for Senior Software Engineers, Operating Systems. We are a leading supplier of massively parallel supercomputers, which run a fully distributed version of OSF1/AD (Mach microkernel, Unix server) on 1000+ nodes, producing 100s of gigaFLOPS and terabytes of data. Not for the faint of heart :-) Job descriptions are attached. Please mail resumes (please ABSOLUTELY no FAXes, no phone calls, no e-mail): Linda Ernst c/o Intel Corporation Mail Stop CO1-01 5200 N.E. Elam-Young Parkway Hillsboro, OR 97124-6497 =============================================================================== Position #1: Operating System Designer, Memory Management Description: Specify, design, prototype and implement an advanced distributed memory management architecture in a Mach/Unix-based operating system for Intel's next generation parallel supercomputer. Includes collaboration with with internal and academic applications researchers to provide high performance operating system support for new parallel programming models. Education and Skills: Minimum BSCS, Masters preferred, 6 to 10 years programming experience, 3 to 5 years operating system design and development experience. Solid knowledge of high performance system design, Unix internals, microkernel operating systems, distributed and parallel architectures. Multicomputer and scalable operating system experience a plus, experience in the areas of supercomputing a plus. Design experience with memory management required. 
=============================================================================== Position #2: Operating System Designer, Message Passing Description: Design, prototype and implement message passing-related features, and new message passing protocols, in a Mach/Unix-based operating system for Intel's next generation parallel supercomputer. Education and Skills: Minimum BSCS, Masters preferred, 5 to 8 years programming experience, 2 to 5 years operating system design and development experience. Solid knowledge of high performance system design, Unix internals, microkernel operating systems, distributed and parallel architectures. Multicomputer operating system experience a plus, experience in the areas of supercomputing a plus. Experience with message passing highly desirable. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 11 Oct 93 16:59:57 PDT From: ahicken@parasoft.com (Arthur Hicken) Subject: Parallel Programming Course Hands-on Parallel/Distributed Programming Courses offered. ParaSoft Corporation, the leader in distributed and parallel computing tools, will conduct a hands-on, introductory course on the theory and practice of distributed and parallel computing. The hands-on focus of this course (75% of the total time) ensures that participants will gain a practical understanding of distributed computing applications. Each participant will program on a workstation linked to a network within the lab, to demonstrate and verify theoretical concepts presented in the seminar. This course has been timed to take place after the end of the Cluster Workshop at Florida State, so you can plan to attend both if you'd like. Course Goals: Upon completion of the course, the participant will be able to: 1. Set up a simple job dispatcher with dynamic load balancing. 2. Build an application which runs on multiple platforms. 3. Implement process communication for tightly coupled applications. Course Content: 1. Theory - Introduction to parallel/distributed computing, programming models, programming environments. 2. Labs - Machine setup, running parallel/distributed programs, basic parallel/distributed I/O, message passing, global operations, data decomposition, heterogeneous computing. Prerequisites: 1. Working knowledge of C or Fortran. 2. Familiarity with Unix. 3. Strong desire to learn about distributed computing. Dates : Friday, December 10 - Sunday, December 12 Location : Florida State University, Tallahassee, Florida Instructors: Dr. Adam Kolawa - Worldwide expert and lecturer on distributed computing Lab Setup: Each participant will develop distributed applications at a workstation on a network within the lab. Cost: $495 - includes a complete set of tutorial materials and Express manuals. Lunches and the evening receptions are included. Cost can be credited toward purchase of Express, or toward application development services. Educational Discount: Only $200 for university personnel and students. Participation: Strictly limited to 15 people. Please call or send email to ParaSoft early to reserve your space. Applications are accepted on a first-come, first-served basis. Additional courses are available to graduates of the Level I course: Level II - 3 days. Covers parallel/distributed: debugging, graphics, performance monitoring, parallelization techniques, asynchronous programming, basic parallel/distributed application skeletons, etc. Level III - 3 days.
Covers application of topics learned in the level I and II courses by applying these techniques on real applications. A copy of the transparencies used in the course can be obtained from the ParaSoft anonymous ftp server at ftp.parasoft.com (192.55.86.17) in the /express/classes directory. For more information contact: ParaSoft Corporation 2500 E. Foothill Blvd. Pasadena, CA 91107-3464 voice: (818) 792-9941 fax : (818) 792-0819 email: info@parasoft.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: a92188@cs.ait.ac.th ( Mr. Wu Xiao Wen (PR) ) Subject: Open files in pvm slave process Organization: UTexas Mail-to-News Gateway I wrote a program tried to open a file in a pvm slave process, but it failed. Can anybody tell me if I can open a file in the slave program ? Xiaowen Wu a92188@cs.ait.ac.th Approved: parallel@hubcap.clemson.edu Path: bounce-back Newsgroups: comp.parallel,comp.lang.functional From: schreine@risc.uni-linz.ac.at (Wolfgang Schreiner) Subject: Parallel Functional Programming Bibliography Followup-To: comp.lang.functional Keywords: parallel computation, functional programming Organization: RISC-Linz, Johannes Kepler University, Linz, Austria This is a repost of an article I've posted some 3 months ago. Recently I learned that due to a news server error, this posting probably did not leave our local domain, so I'll try again ... -- I've compiled an annotated bibliography on parallel functional programming that might be useful for some of you. The bibliography lists more than 350 publications mostly including their *full abstracts*. You can retrieve the paper by anonymous ftp from ftp.risc.uni-linz.ac.at (193.170.36.100) in pub/reports/parlab/pfpbib.dvi.Z (or pfpbib.ps.Z). Here the abstract: This bibliography cites and comments more than 350 publications on the parallel functional programming research of the last 15 years. It focuses on the software aspect of this area i.e.\ on languages, compile-time analysis techniques (in particular for strictness and weight analysis), code generation, and runtime systems. Excluded from this bibliography are publications on special architectures and on garbage collection unless they contain aspects interesting for above areas. Most bibliographic items are listed inclusive their full abstracts. If the bibliography is useful for your work, please be so kind to cite it in resulting publications as Wolfgang Schreiner. Parallel Functional Programming --- An Annotated Bibliography. Technical Report 93-24, RISC-Linz, Johannes Kepler University, Linz, Austria, May 1993. Any comments, corrections and supplements are welcome. Wolfgang -- Wolfgang Schreiner Research Institute for Symbolic Computation (RISC-Linz) Johannes Kepler University, A-4040 Linz, Austria Tel: +43 7236 3231 66 Fax: +43 7236 3231 30 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Tue, 12 Oct 93 10:17:42 -0400 From: rao@watson.ibm.com Subject: Workshop on Visualization and Machine Vision CALL FOR PAPERS ================================================================= ================================================================= IEEE Workshop on Visualization and Machine Vision The Westin Hotel, Seattle, Washington June 24, 1994 (Note: The workshop is a day after CVPR at the same site. So researchers can stay an extra day and attend the workshop). 
Sponsored by IEEE Computer Society: Pattern Analysis and Machine Intelligence Technical Committee and Computer Graphics Technical Committee ================================================================= ================================================================= Visualization is a rapidly growing discipline, and has become one of the most important tools of modern computational science. The goal of research in visualization is to extract meaningful information from voluminous datasets through the use of imaging and interactive graphics. This goal has been made feasible by recent advancements in multi-media technology. Computer Vision, on the other hand, is concerned with the automatic interpretation of images. Thus, both disciplines are concerned with computational problems associated with images. The aim of this workshop is to explore the synergy between these two research areas and identify new applications and promising new directions for interdisciplinary research. Some examples of such applications are: automated analysis of flow visualization images, fusion of multiple images and visualization of medical images. In many such applications, computer vision may be used to aid and complement human analysis. For example, computer vision may be applied for selective visualization, where the image display is preceded by image analysis to isolate regions of interest in the data. Such regions of interest could be edges in data, or areas around singularities. Techniques such as edge detection and segmentation could be extended to data that are not necessarily visual, e.g. financial or geographic data. Computer vision could benefit from techniques developed in visualization, such as the fusion of multiple images for display, visualization of reconstruction techniques, display of multi- dimensional vector fields, etc. We invite both theoretical and application oriented papers exploring any aspect of the interaction between these two disciplines. Suggested topics are listed below. This list is not exhaustive and other relevant papers are welcome. SUGGESTED TOPICS Fusion of multiple images Geographical data analysis Flow visualization Medical Imaging Financial data analysis Image databases Multimedia techniques Integration of multiple views Marine imaging Interactive segmentation Visualization of reconstruction techniques Evaluation of visualization techniques 3-d in segmentation for visualization Analysis of test and measurement data Quantitative machine vision techniques PAPER SUBMISSION Four copies of complete manuscript should be received by December 13, 1993 at the address: A. Ravishankar Rao, IBM Research, P.O. Box 218, Yorktown Heights, NY 10598, USA. Please include the following (a) A title page containing the names and addresses of the authors (including e-mail), and abstract of up to 200 words. (b) A second page with title and abstract only (no author names). (c) Paper -- limited to 25 double spaced pages (12 points, 1 inch margins). PROGRAM CHAIR PROGRAM CO-CHAIR A. Ravishankar Rao Ramesh Jain IBM Research Electrical and Computer Engineering Dept. P.O. Box 218 University of California at San Diego Yorktown Hts. NY 10598 La Jolla, CA 92093 rao@watson.ibm.com jain@ece.ucsd.edu PROGRAM COMMITTEE Rabi Dutta, Univ. Massachusetts, Amherst Todd Elvins, U.C. San Diego Thomas Huang, U. of Illinois, Urbana Arie Kaufman, SUNY Stonybrook Shih-Ping Liou, Siemens Inc. Robin Strickland, U. Arizona Demetri Terzopoulos, Univ. 
Toronto =============================================================================== Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.arch,comp.parallel From: thompson@aust.inmos.co.uk () Subject: Crossbar switch info wanted Reply-To: thompson@aust.inmos.co.uk () Organization: INMOS Architecture Group A little while ago, someone posted a summary of recently announced crossbar switch chips. To my chagrin, I find I've lost this information, so I would be very grateful if the original poster (or anyone else who kept a copy) could e-mail it to me. Thank you. -- Peter Thompson INTERNET: thompson@inmos.co.uk INMOS Ltd JANET: thompson@uk.co.inmos 1000 Aztec West UUCP: uknet!inmos!thompson Bristol BS12 4SQ, U.K. Telephone: +44 454 616616 FAX: +44 454 617910 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: manu@CS.UCLA.EDU (Maneesh Dhagat) Subject: Parallel Implementations of SETL? Organization: UCLA, Computer Science Department Hi, Are there any parallel implementations of SETL? I know that Paralation Lisp, CM Lisp, SQL, APL, etc. come in a similar class of (collection-oriented) languages, but I want to specifically know if SETL has been implemented on parallel machines. If you can email me any references/ftp sites, it would be much appreciated. --Maneesh Dhagat -- --------------------------------------------------- Maneesh Dhagat (manu@cs.ucla.edu) University of California, Los Angeles, CA 90024 --------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: lernst@SSD.intel.com (Linda Ernst) Subject: OS Designers, Intel SSD, Beaverton, OR, USA Organization: Supercomputer Systems Division (SSD), Intel The Supercomputing Systems Division of Intel, has positions available now in Beaverton, Oregon for Senior Software Engineers, Operating Systems. We are a leading supplier of massively parallel supercomputers, which run a fully distributed version of OSF1/AD (Mach microkernel, Unix server) on 1000+ nodes, producing 100s of gigaFLOPS and terabytes of data. Not for the faint of heart :-) Job descriptions are attached. Please mail resumes (please ABSOLUTELY no FAXes, no phone calls, no e-mail): Linda Ernst c/o Intel Corporation Mail Stop CO1-01 5200 N.E. Elam-Young Parkway Hillsboro, OR 97124-6497 =============================================================================== Position #1: Operating System Designer, Memory Management Description: Specify, design, prototype and implement an advanced distributed memory management architecture in a Mach/Unix-based operating system for Intel's next generation parallel supercomputer. Includes collaboration with with internal and academic applications researchers to provide high performance operating system support for new parallel programming models. Education and Skills: Minimum BSCS, Masters preferred, 6 to 10 years programming experience, 3 to 5 years operating system design and development experience. Solid knowledge of high performance system design, Unix internals, microkernel operating systems, distributed and parallel architectures. Multicomputer and scalable operating system experience a plus, experience in the areas of supercomputing a plus. Design experience with memory management required. 
=============================================================================== Position #2: Operating System Designer, Message Passing Description: Design, prototype and implement message passing-related features, and new message passing protocols, in a Mach/Unix-based operating system for Intel's next generation parallel supercomputer. Education and Skills: Minimum BSCS, Masters preferred, 5 to 8 years programming experience, 2 to 5 years operating system design and development experience. Solid knowledge of high performance system design, Unix internals, microkernel operating systems, distributed and parallel architectures. Multicomputer operating system experience a plus, experience in the areas of supercomputing a plus. Experience with message passing highly desirable. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: suh@rainbow.dcrt.nih.gov (Edward Suh) Subject: Parallel Bin Packing Algorithm Reply-To: suh@rainbow.dcrt.nih.gov Organization: National Institutes of Health I am looking for parallel bin packing algorithms. I would appreciate if anyone send me or point me to references or algorithms. Thanks. ---------------------------------------------------------------------- Edward B. Suh Phone: (301) 480-3835 Bldg. 12A, Room 2029 Internet: suh@alw.nih.gov National Institutes of Health Bethesda, MD 20892 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hksung@helios.ece.arizona.edu (Hongki Sung) Subject: References to parallel algorithms? Organization: U of Arizona Electrical and Computer Engineeering Can anyone help me in finding references for the following parallel algorithms on various interconnection topologies such as Boolean cube, mesh/torus, etc... - FFT - Sorting - Matrix algorithms (sum, multiplication, ...) - and other representative parallel algorithms Thanks in advance. Hongki hksung@ece.arizona.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: karent@quip.eecs.umich.edu (Karen A. Tomko) Subject: Harwell-Boeing Sparse Matrix Collection Organization: University of Michigan EECS Dept., Ann Arbor, MI Is the Harwell-Boeing sparse matrix collection available by anonymous ftp? How about the "Users' Guide for the Harwell-Boeing sparse matrix collection"? Thanks in advance, Karen -- Karen Tomko karent@eecs.umich.edu Graduate Student Research Assistant University of Michigan Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jd@viz.cs.unh.edu (Jubin P Dave) Subject: Recursive tree traversal in C* Organization: University of New Hampshire - Durham, NH hello all, i would like to tap into the collective wisdom of the internet to solve a particularly vexing problem that i am faced with. i am trying to implement a radiosity algorithm using C*. i have a BSP tree which i need to traverse to create a front to back list of polygons. As C* lacks parallel pointers i use arrays to immplement my tree. now my problem is that as function calls are scalar code i cannot use the usual "where" control structures and call the same routine recursively. doing so means going into an infinite loop. is there anyway to overcome this ? any and all help will be higihly appreciated. 
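A rough sketch of the usual workaround, in plain C rather than C*: eliminate the recursion by driving the front-to-back walk from an explicit stack, so the traversal becomes a single loop over the array-based tree. The node layout and the names (front_child, back_child, eye_in_front, bsp_front_to_back) are hypothetical and not taken from the poster's code; whether the loop body can then be expressed with C* "where" statements depends on the rest of the program.

#include <stdio.h>

#define MAX_NODES 1024
#define NIL       (-1)

/* Array-based BSP tree: children are stored as indices, not pointers. */
static int front_child[MAX_NODES];
static int back_child[MAX_NODES];

/* Emit node indices front to back, starting at `root'.  eye_in_front[n]
   says which side of node n's splitting plane the eye is on; a real
   radiosity code would compute this from the plane equation. */
void bsp_front_to_back(int root, const int *eye_in_front)
{
    int node_stack[2 * MAX_NODES + 1];
    int emit_flag[2 * MAX_NODES + 1];   /* 0 = expand node, 1 = output it */
    int top = 0;

    node_stack[top] = root;  emit_flag[top] = 0;  top++;

    while (top > 0) {
        int n, emit;

        top--;
        n    = node_stack[top];
        emit = emit_flag[top];

        if (n == NIL)
            continue;

        if (emit) {
            printf("polygon %d\n", n);     /* stand-in for "append to list" */
        } else {
            int near_c = eye_in_front[n] ? front_child[n] : back_child[n];
            int far_c  = eye_in_front[n] ? back_child[n]  : front_child[n];

            /* Pushed in reverse order so they pop as: near, node, far. */
            node_stack[top] = far_c;   emit_flag[top] = 0;  top++;
            node_stack[top] = n;       emit_flag[top] = 1;  top++;
            node_stack[top] = near_c;  emit_flag[top] = 0;  top++;
        }
    }
}

Once the traversal is a loop the recursive-call problem disappears; how much of the per-polygon work can then be kept in parallel code is a separate question this sketch does not address.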
you can mail me at jd@viz.cs.unh.edu thanks jubin __ i plan to live forever or die in the attempt Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Tue, 12 Oct 93 21:58:15 PDT From: ahicken@parasoft.com (Arthur Hicken) Subject: Parallel Programming Class Hands-on Parallel/Distributed Programming Courses offered. ParaSoft Corporation, the leader in distributed and parallel computing tools, will conduct a hands on, introductory course on the theory and practice of distributed and parallel computing. This unique, hands-on focus of this course, 75% of the total time, assures that participants will gain a practical understanding of distributed computing applications. Each participant will program on a workstation linked to a network within the lab, to demonstrate and verify theoretical concepts presented in the seminar. This course has been timed to take place after the end of the Cluster Workship at Florida State, so you can plan to attend both if you'd like. Course Goals: Upon completion of the course, the participant will be able to: 1. Set up a simple job dispatcher with dynamic load balancing. 2. Build an application which runs on multiple platforms. 3. Implement process communication for tightly coupled applications. Course Content: 1. Theory - Introduction to parallel/distributed computing, programming models, programming environments. 2. Labs - Machine setup, Running parallel/distributed programs, basic parallel/distributed I/O, message passing, global operations, data decomposition, heterogeneous computing. Prerequisites: 1. Working knowledge of C or Fortran. 2. Familiarity with Unix 3. Strong desire to learn about distributed computing. Dates : Friday, December 10 - Sunday, December 12 Location : Florida State University, Tallahassee, Florida Instructors: Dr. Adam Kolawa - World wide expert and lecturer on distributed computing Lab Setup: Each participant will develop distributed applications at a workstation on a network within the lab. Cost: $495 - includes a complete set of tutorial materials and Express manuals. Lunches and the evening receptions are included. Cost can be credited toward purchase of Express, or toward application development services. Educational Discount: Only $200 for university personnel and students. Participation: Strictly limited to 15 people. Please call or send email to parasoft early to reserve your space. Applications are accepted on a first-come, first-serve basis. Additional Courses are available to graduates from the Level I course: Level II - 3 days. Covers parallel/distributed: debugging, graphics, performance monitoring, parallelization techniques, asynchronous programming, basic parallel/distributed application skeletons, etc. Level III - 3 days. Covers application of topics learned in the level I and II courses by applying these techniques on real applications. A copy of the transparencies used in the course can be obtained from the ParaSoft anonymous ftp server at ftp.parasoft.com (192.55.86.17) in the /express/classes directory. For more information contact: ParaSoft Corporation 2500 E. Foothill Blvd. Pasadena, CA 91107-3464 voice: (818) 792-9941 fax : (818) 792-0819 email: info@parasoft.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Sender: news@ford.ee.up.ac.za (NetNews Daemon) From: reyn-jj@mella.ee.up.ac.za Subject: REAL TIME compression algorith I'm looking for a real time compression algorith or sorce code. 
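No production source to point at, but as a yardstick for what "real time" usually implies, here is a minimal run-length encoder in C: one pass, no tables, O(n) time, at the cost of poor ratios on data without repeated bytes. The output format (a count byte followed by a value byte) and the name rle_encode are invented for the example.

#include <stddef.h>

/* Encode n bytes from `in' into `out'; returns the number of bytes
   written.  The caller must supply at least 2*n bytes of output space. */
size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t i = 0, o = 0;

    while (i < n) {
        unsigned char val = in[i];
        size_t run = 1;

        /* Count identical bytes, capped so the count fits in one byte. */
        while (i + run < n && in[i + run] == val && run < 255)
            run++;

        out[o++] = (unsigned char)run;   /* run length, 1..255  */
        out[o++] = val;                  /* the repeated value  */
        i += run;
    }
    return o;
}

When better ratios are needed at still-modest cost, LZ77-style coders such as Ross Williams' LZRW family are a common next step.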
Any help will be appreciate. Thanx Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: debit@lanpc1.univ-lyon1.fr (Naima Debit) Subject: scientific parallel computing workshop Organization: Laboratoire d'Analyse Numerique - PC Linux SCIENTIFIC PARALLEL COMPUTING WORKSHOP ************************************** Thursday, October 21st 1993 *************************** (Seminar room, Building 006-cafeteria) 9:00 - 9:15 Opening by J. F. Maitre, Chairman of URA 740 9:20 - 10:00 M. Cosnard, LIP-ENS of Lyon On the methodologies of parallel programming 10:00 - 10:40 G. Meurant, CEA Saclay CEA/DAM projects in parallel computating 10:40 - 11:00 Coffee Break 11:00 - 11:40 S. Hawkinson, INTEL INTEL Applications ... 11:40 - 12:20 A. Dervieux, INRIA Sophia-Antipolis On the implementation of parallel algorithms in unstructured meshes 12:20 - 14:00 Lunch 14:00 - 14:40 D. Keyes, Old Dominion University & Yale University Parallel Implicit Methods in Computational Fluid Dynamics 14:40 - 15:20 P. Le Tallec, University of Paris Dauphine & INRIA Rocquencourt Domain Decomposition Methods in Mechanics: recent improvements and parallel implementation issues} 15:20 - 16:00 M. Garbey, University of Lyon 1 Domain Decomposition, Asymptotic Analysis and Numerical Simulation on parallel architecture 16:00 - 16:20 Coffee Break 16:20 - 17:10 H.G. Kaper and D. Levine, Argonne National Laboratory Argonne's computational science project in super-conductivity 17:10 - 17:50 J. Periaux, Dassault Aviations Grand Challenge Problem and parallel computing 18:00 Official opening of the "Centre pour le Developpement du Calcul Scientifique Parallele" ***** This conference is open to any scientist with no registration fees. ***** LABORATOIRE D'ANALYSE NUMERIQUE, UNIVERSITE CLAUDE BERNARD LYON 1 Batiment 101 ; 43 bd du 11 Novembre 1918, 69622 Villeurbanne cedex Telephone: 72 44 80 55 ou 72 43 10 93 E-mail: garbey@lan1.univ-lyon1.fr Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: walker@rios2.epm.ornl.gov (David Walker) Subject: SHPCC94 Call for Papers etc. THE 1994 SCALABLE HIGH PERFORMANCE COMPUTING CONFERENCE SHPCC94 DEADLINE FOR EXTENDED ABSTRACTS OF PAPERS: November 1, 1993 KNOXVILLE, TENNESSEE, U.S.A., MAY 23 - 25, 1994 GENERAL CHAIR: Jack Dongarra University of Tennessee and Oak Ridge National Laboratory dongarra@cs.utk.edu 615 974-8296 (fax) PROGRAM CHAIR: David W. Walker Oak Ridge National Laboratory walker@msr.epm.ornl.gov 615 574-0680 (fax) PROGRAM COMMITTEE: David Bailey, NASA Ames Research Center William Gropp, Argonne National Laboratory Rolf Hempel, Gesellschaft fur Mathematik und Datenverarbeitung, Germany Anthony Hey, University of Southampton Charles Koelbel, Rice University Steve Otto, Oregon Graduate Institute Cherri Pancake, Oregon State University Paul Pierce, Intel Supercomputer Systems Division Sanjay Ranka, Syracuse University Gary Sabot, Thinking Machines Corporation Robert Schreiber, NASA RIACS Bernard Tourancheau, LIP, CNRS, Ecole Normale Superieure de Lyon, France Robert van de Geijn, University of Texas, Austin Katherine Yelick, University of California, Berkeley SPONSORED BY: IEEE Computer Society The 1994 Scalable High Performance Computing Conference (SHPCC94) is a continuation of the highly successful Hypercube Concurrent Computers and Applications (HCCA), and Distributed Memory Concurrent Computing (DMCC) conference series. 
SHPCC takes place biennially, alternating with the SIAM Conference on Parallel Processing for Scientific Computing. INVITED SPEAKERS: Guy Blelloch, Carnegie Mellon University Phil Colella, University of California, Berkeley David Culler, University of California, Berkeley Monica Lam, Stanford University Marc Snir, IBM T.J. Watson Research Center SHPCC94 will provide a forum in which researchers in the field of high performance computing from government, academia, and industry can presents results and exchange ideas and information. SHPCC94 will cover a broad range of topics relevant to the field of high performance computing. These topics will include, but are not limited to, the following; Architectures Load Balancing Artificial Intelligence Linear Algebra Compilers Neural Networks Concurrent Languages Non-numerical Algorithms Fault Tolerance Operating Systems Image Processing Programming Environments Large-scale Applications Scalable Libraries C++ THE SHPCC94 program will include invited talks, contributed talks, posters, and tutorials. SHPCC94 will take place at the Holiday Inn Convention Center in Knoxville, Tennessee. Registration details will be made available later. Instructions for Submitting Papers ---------------------------------- Authors are invited to submit contributed papers describing original work that makes a significant contribution to the design and/or use of high performance computers. All contributed papers will be refereed by at least three qualified persons. All papers presented at the conference will be published in the Conference Proceedings. 1. Submit 3 copies of an extended abstract of approximately 4 pages. Abstracts should include a succinct statement of the problems that are considered in the paper, the main results achieved, an explanation of the significance of the work, and a comparison with past research. To ensure a high academic standard, the abstracts of all contributed papers will be refereed. DEADLINE FOR EXTENDED ABSTRACTS OF PAPERS: November 1, 1993 Authors will be notified of acceptance by January 14, 1994 DEADLINE FOR FINAL CAMERA-READY COPY OF COMPLETE PAPER: February 14, 1994 The final complete paper should not exceed 10 pages. 2. Each copy of the extended abstract should have a separate title page indicating that the paper is being submitted to SHPCC94. The title page should also give the title of the paper and the names and addresses of the authors. The presenting author, and the author to whom notification of acceptance should both be sent, should be clearly indicated on the title page, together with their phone, fax, and email. 3. Extended abstracts should be sent to the Program Chair, David Walker, at the address above. Poster Presentations -------------------- Poster presentations are intended to provide a more informal forum in which to present work-in-progress, updates to previously published work, and contributions not suited for oral presentation. To submit a poster presentation send a short (less than one page) abstract to the Program Chair, David Walker, at the address above. Poster presentations will not appear in the Conference Proceedings. DEADLINE FOR SHORT ABSTRACTS OF POSTERS: November 1, 1993 Poster presenters will be notified of acceptance by January 14, 1994 Abstracts for poster presentations must include all the information referred to in (2) above. If this will not fit on the same page as the abstract, then a separate title page should be provided. 
Instructions for Proposing a Tutorial ------------------------------------- Half-day and full-day tutorials provide opportunity for a researchers and students to expand their knowledge in specific areas of high performance computing. To propose a tutorial, send a description of the tutorial and its objectives to the Program Chair, David Walker, at the address above. The tutorial proposal should include: 1. A half-page abstract giving an overview of the tutorial. 2. A detailed description of the tutorial, its objectives, and intended audience. 3. A list of the instructors, with a brief biography of each. All tutorials will take place on May 22, 1994. DEADLINE FOR TUTORIAL PROPOSALS: November 1, 1993 Tutorial proposers will be notified of acceptance by January 14, 1994 For further information contact David Walker at walker@msr.epm.ornl.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: olsen@HING.LCS.MIT.EDU (James Olsen) Newsgroups: comp.arch,comp.parallel Subject: Optical interconnect thesis available via ftp Organization: MIT Laboratory for Computer Science For those who might be interested, my recent MIT PhD thesis, "Control and Reliability of Optical Networks in Multiprocessors", is now available in both Postscript and LaTeX formats via anonymous FTP. The files are on hing.lcs.mit.edu, in the directory pub/olsen. Here is a shortened abstract: -------------------------------------------------------------- Control and Reliability of Optical Networks in Multiprocessors James J. Olsen - Ph.D. Thesis - MIT EECS Dept. - May 1993 Optical communication links have great potential to improve the performance of interconnection networks within large parallel multiprocessors, but semiconductor laser drive control and reliability problems inhibit their wide use. This thesis describes a number of system-level solutions to these problems. The solutions are simple and inexpensive enough to be practical for implementation in the thousands of optical links that might be used in a multiprocessor. Semiconductor laser reliability problems are divided into two classes: transient errors and hard failures. It is found that for transient errors, the computer system might require a very low bit-error-rate (BER), such as 10^-23, without error control. Optical links cannot achieve such rates directly, but a much higher link-level BER (such as 10^-7) would be acceptable with simple error detection coding. A feedback system is proposed that will enable lasers to achieve these error levels even when laser threshold current varies. Instead of conventional techniques using laser output monitors, a software-based feedback system can use BER levels for laser drive control. Experiments demonstrate that this method is feasible, and has other benefits such as laser wearout tracking and optical loss compensation. For hard failures, one can provide redundant spare optical links to replace failed ones. Unfortunately, this involves the inclusion of many extra, otherwise unneeded optical links. A new approach, called `bandwidth fallback', is presented which allows continued use of partially-failed channels while still accepting full-width data inputs, providing high reliability without any spare links. It is concluded that the drive control and reliability problems of semiconductor lasers should not bar their use in large scale multiprocessors, since inexpensive system-level solutions to them are possible. 
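To make the BER argument in the abstract above concrete, some back-of-the-envelope arithmetic (the constants here are illustrative assumptions, not numbers from the thesis): with an error-detecting code plus retransmission, detected corruption only costs a retry, so the figure the system really has to budget for is the undetected-error rate.

/* Illustrative arithmetic only; the constants are assumptions, not the
   thesis's numbers.  Shows why simple error detection relaxes the raw
   BER a link must deliver.  Link with -lm. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double ber      = 1e-7;            /* assumed raw link bit-error rate  */
    double bits     = 1000.0;          /* assumed packet size in bits      */
    double p_escape = pow(2.0, -32.0); /* fraction of corrupted packets an
                                          idealised 32-bit check misses    */

    double p_corrupt = 1.0 - pow(1.0 - ber, bits);  /* ~ bits * ber        */
    double p_undet   = p_corrupt * p_escape;        /* per packet          */
    double undet_ber = p_undet / bits;              /* per bit             */

    printf("P(packet corrupted)       = %.3e\n", p_corrupt);
    printf("P(undetected, per packet) = %.3e\n", p_undet);
    printf("effective undetected BER  = %.3e\n", undet_ber);
    return 0;
}

Even with these toy numbers the undetected rate falls many orders of magnitude below the raw link BER, which is the shift behind the abstract's 10^-7 versus 10^-23 comparison.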
Thesis Supervisor: Anant Agarwal -- Jim Olsen - olsen@cag.lcs.mit.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.arch,comp.parallel From: hhlo@IASTATE.EDU (Hsiung-Hui Lo) Subject: Re: IEEE Computer Special Issue on Associative Processing and Processors - NEW DATES Reply-To: hhlo@IASTATE.EDU (Hsiung-Hui Lo) Organization: Iowa State University Hi, I am working on sparse linear system. Anybody knows how to generate spare matrices on parallel machine? I appreciate if you give me that information. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cosc19ub@rosie.uh.edu (Li, Sam) Subject: topological properties for various internetworking topologies Organization: University of Houston Hello netters, I am doing some survey report about comparing topological properties for various internetworking topologies, e.g. star, cube, mesh, etc.. I would appreciate any pointers or references concerning this. Please response via email. Sam Li cosc19ub@jetson.uh.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel from: valerie@cri.ens-lyon.fr (valerie roger) subject: ppl contents organization: ecole normale superieure de lyon Please find hereafter the contents of Parallel Processing Letters, volumes 1 and 2. Valerie Roger, LIP, ENS Lyon %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} {\bf PARALLEL PROCESSING LETTERS}\\ Parallel Processing Letters (PPL) aims to disseminate rapidly results in the field of parallel processing in the form of short letters. It will have a wide scope and cover topics such as the design and analysis of parallel and distributed algorithms, the theory of parallel computation, parallel programming languages, parallel programming environments, parallel architectures and VLSI circuits. Original results are published, and experimental results if they contain an analysis corresponding to an abstract model of computation. PPL is be and ideal information vehicle for recent high quality achievements. Information can be obtained form the Editor in Chief, Professor Michel Cosnard at cosnard@lip.ens-lyon.fr \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentstyle[11pt,french,a4]{letter} %\title{\bf} %\author{\mbox{}} %\date{\mbox{}} %\pagestyle{empty} %\maketitle %\parindent=0cm \begin{document} \begin {center} {\bf PARALLEL PROCESSING LETTERS}\\ CONTENTS - Volume 1 - Number 1 - September 1991 \end {center} \begin {tabular} {p{145mm}r} Editorial Note\\ M. Cosnard & 1\\ \\ Determining the Model in Parallel\\ E.A. Albacea & 3\\ \\ Maintaining Digital Clocks in Step\\ A. Arora, S. Dolev, M. Gouda & 11\\ \\ SyNsthesis of Processor Arrays for the Algebraic Path Problem: Unifying Old\\ Results and Deriving New Architectures\\ T. Risset, Y. Robert & 19\\ \\ On the Power of Two-Dimensional Processor Arrays with Reconfigurable Bus\\ Systems\\ S. Olariu, J. Schwing, J. Zhang & 29\\ \\ Additive Spanners for Hypercubes\\ A.L. Liestman , T.C. Shermer & 35\\ \\ Subcube Embeddability of Folded Hypercubes\\ S. Latifi & 43\\ \\ An O(n) Parallel Algorithm for Solving the Traffic ContrOl Problem\\ on Crossbar Switch Networks\\ K.T. Sun, H.C. Fu & 51\\ \\ Modelling a Morphological Thinning Algorithm for Shared Memory\\ SIMD Computers\\ A. Datta, S.V. Joshi, R.N. Mahapatra & 59\\ \\ A Note on Off-Line Permutation Routing on a Mesh-Connected Processor\\ Array\\ D. 
Krizanc & 67\\ \end {tabular} \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentstyle[11pt,french,a4]{letter} \title %\author{\mbox{}} %\date{\mbox{}} %\pagestyle{empty} \maketitle \parindent=0cm \begin {document} \begin {center} {\bf PARALLEL PROCESSING LETTERS\\ CONTENTS - Volume 1 - Number 2 - December 1991} \end {center} \begin {tabular} {p{145mm}r} Editorial Note\\ M. Cosnard & 71\\ \\ Linear Scheduling is Nearly Optimal\\ A. Darte, L. Khachiyan, Y. Robert & 73\\ \\ Specifying Control Signals for Systolic Arrays by Uniform Recurrence\\ Equations\\ J. Xue & 83\\ \\ Conflict-Free Strides for Vectors in Matched Memories\\ M. Valero, T. Lang, JM. Llaberia, M. Peiron, JJ. Navarro, E. Ayguadi & 95\\ \\ On the Real Power of Loosely Coupled Parallel Architectures\\ M. Cosnard, A. Ferreira & 103\\ \\ Large Sorting and Routing Problems on the Hypercube and Related Networks\\ G. Manzini & 113\\ \\ On Lower Bounds for the Communication Volume in Distributed Systems\\ UA. Ranawake, PM. Lenders, S.M. Goodnick & 125\\ \\ An Improved Maximal Matching Algorithm\\ SB. Yang, SK. Dhall, S. Lakshmivarahan & 135\\ \\ Constant Delay Parallel Counters\\ SG. Akl, T. Duboux, I. Stojmenovic & 143\\ \\ Ranking on Reconfigurable Networks\\ Y. Ben-Asher, A. Schuster & 149\\ \\ On Processing Multi-Joins in Parallel Systems\\ KL. Tan, H. Lu & 157\\ \\ Author Index - Volume 1 (1991) & 165\\ \end {tabular} \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentstyle[11pt,french,a4]{letter} %\title{\bf} %\author{\mbox{}} %\date{\mbox{}} %\pagestyle{empty} %\maketitle %\parindent=0cm \begin{document} \begin {center} {\bf PARALLEL PROCESSING LETTERS}\\ CONTENTS - Volume 2 - Number 1 - March 1992 \end {center} \begin {tabular} {p{145mm}r} Editorial Note\\ M. Cosnard & 1\\ \\ Construction of Large Packet Radio Networks\\ JC Bermond, P. Hell, JJ. Quisquater& 3\\ \\ The Cube-Connected Cycles Network is a Subgraph of the Butterfly Network\\ R. Feldmann, W. Unger & 13\\ \\ Distributed Deadlock Detection Algorithms\\ M. Flatebo, A.K. Datta & 21\\ \\ Optimal Tree Ranking is in NC\\ P. de la Torre, R. Greenlaw, TM. Przytycka & 31\\ \\ Descriminating Analysis and its Application to Matrix by Vector \\ Multiplication on the PRAM\\ LF. Lindon & 43\\ \\ Performance Estimation of LU Factorisation on Message Passing\\ Multiprocessors\\ BV. Purushotham, A. Basu, PS. Kumar, LM. Patnaik & 51\\ \\ The Combination Technique for the Sparse Grid Solution of PDE's on Multiprocessor Machines\\ M. Griebel & 61\\ \\ Refined Mark(s)-Set-Based Backtrack Literal Selection for AND Parallelism in Logic Programs \\ DH. Kim, KM. Choe, T. Han & 71\\ \\ Mapping Binary Precedence Trees to Hypercubes\\ S. Ullman, B. Narahari & 81\\ \\ Optimal Subcube Assignment for Partitionable Hypercubes\\ R. Krishnamurti, B. Narahari & 89\\ \\ Synthesizing Linear Systolic Arrays for Dynamic Programming Problems\\ JF. Myoupo & 97\\ \end {tabular} \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentstyle[11pt,a4,french]{article} \title{\bf PARALLEL PROCESSING LETTERS\\ CONTENTS - Volume 2 - Number 2 \& 3 - September 1992} \author{\mbox{}} \date{\mbox{}} \pagestyle{empty} \parindent=0cm \begin{document} \maketitle %\begin {center} %{\bf PARALLEL PROCESSING LETTERS\\ %CONTENTS - Volume 2 - Number 2 \& 3 - September 1992} %\end {center} \begin {tabular} {p{145mm}r} Editorial Note\\ M. 
Cosnard & 111\\ \\ PPL Special Issue on Algorithmic and Structural Aspects of Interconnection\\ Networks : Call for Papers & 115\\ \\ Reconfigurable Parallel Computer Architecture Based on Wavelength-Division\\ Multiplexed Optical Interconnextion Network\\ KA. Aly, PW. Dowd & 117\\ \\ A Reconfiguration Technique for Fault Tolerance in a Hypercube\\ S Rai, JL. Trahan & 129\\ \\ Load Balancing Strategies for Massively Parallel Architectures\\ A. Corradi, L. Leonardi, F. Zambonelli & 139\\ \\ Embedding Mesh in a Large Family of Graphs\\ WJ. Hsu, CV. Page & 149\\ \\ A Parallel Algorithm for Forest Reconstruction\\ S. Olariu, Z. Wen & 157\\ \\ Performance Evaluation of Multicast Wormhole Routing in 2D-Torus\\ Multicomputers\\ CS. Yang, YM Tsai, CY. Liu & 161\\ \\ \end{tabular} \begin{tabular}{p{145mm}r} A Self-Stabilizing Distributed Algorithm to Construct BFS Spanning\\ Trees of a Symmetric Graph\\ S. Sur, PK. Srimani & 171\\ \\ Approximating Maximum 2-CNF Satisfiability\\ DJ. Haglin & 181\\ \\ DTML is Logspace Hard Under $NC^{1}$ Reductions\\ R. Sarnarth & 189\\ \\ Transitive Closure in Parallel on a Linear Network of Processors\\ M. Gastaldo, M. Morvan, JM. Robson & 195\\ \\ The Pairwise Sorting Network\\ I. Parberry & 205\\ \\ Periodic Sorting on Two-Dimensional Meshes\\ M. Kutylowski, R. Wanka & 213\\ \\ Efficient K-Selection in Hypercube Multiprocessors\\ P. Berthomi & 221\\ \\ A Simple Optimal Systolic Algorithm for Generating Permutations\\ SG. Akl, I. Stojmenovic & 231\\ \\ Systolic Generation of Combinations from Arbitrary Elements\\ H. Elhage, I. Stojmenovic & 241\\ \\ Convex Polygon Problems on Meshes with Multiple Broadcasting\\ D. Bhagavathi, S. Olariu, JL. Schwing, J. Zhang & 249\\ \\ A Parallel Processing Model for Real-Time Computer Vision-Aided Road\\ Traffic Monitoring\\ AT. Ali, EL. Dagless & 257\\ \\ A Model of Speculative Parallelism\\ WF. Wong, CK. Yuen & 265\\ \\ Unimodularity and the Parallelization of Loops\\ M. Barnett, C. Lengauer & 273\\ \\ An Improvement of Maekawa's Mutual Exclusion Algorithm to Make it\\ Fault-Tolerant\\ A. Bouabdallah, JC. Konig & 283\\ \\ On Asynchronous Avoidance of Deadlocks in Parallel Programs\\ AE. Doroshenko & 291\\ \end {tabular} \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentstyle[11pt,french,a4]{letter} %\title{\bf} %\author{\mbox{}} %\date{\mbox{}} %\pagestyle{empty} %\maketitle %\parindent=0cm \begin{document} \begin {center} {\bf PARALLEL PROCESSING LETTERS}\\ CONTENTS - Volume 2 - Number 4 - December 1992 \end {center} \begin {tabular} {p{145mm}r} Editorial Note\\ M. Cosnard & 299\\ \\ Constructing An Exact Parity Base is in $RNC^{2}$\\ G. Galbiati, F. Maffioli & 301\\ \\ Parallel Constructions of Heaps and Min-Max Heaps\\ S. Carlsson, J. Chen & 311\\ \\ Computation List Evaluation and Its Applications\\ EA. Albacea & 321\\ \\ Data Parallel Computation of Euclidean Distance Transforms\\ T. Bossomaier, N. Isidoro, A. Loeff & 331\\ \\ Two Selection Algorithms on A Mesh-Connected Computer\\ BS. Chlebus & 341\\ \\ Channel Classes: A New Concept for Deadlock Avoidance in\\ Wormhole Networks\\ J. Duato & 347\\ \\ Broadcasting Time in Sparse Networks with Faulty Transmissions\\ A. Pelc & 355\\ \\ A Low Overhead Schedule for A 3D-Grid Graph\\ E. Bampis, JC Konig, D. Trystram & 363\\ \\ Multi-Rate Arrays and Affine Recurrence Equations\\ PM. Lenders & 373\\ \\ Simulation of Genetic Algorithms on MIMD Multicomputers\\ I. De Falco, R. Del Balio, E. Tarantino, R. 
Vaccaro & 381\\ \\ Parallel Buddy Memory Management\\ T. Johnson, TA. Davis & 391\\ \\ Author Index - Volume 2 (1992) & 399\\ \end {tabular} \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% --- Valerie ROGER Laboratoire de l'Informatique du Parallelisme Ecole Normale Superieure 46, allee d'Italie 69364 LYON CEDEX 07 FRANCE Phone : (+33) 72 72 80 37 EARN/BITNET : valerie@frensl61.bitnet Fax : (+33) 72 72 80 80 FNET/EUNET/UUNET : valerie@ensl.ens-lyon.fr Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wittmann@Informatik.TU-Muenchen.DE (Marion Wittmann) Subject: classification of algorithms Organization: Technische Universitaet Muenchen, Germany I'm trying to classify parallel algorithms. Especially I'm interested in their characteristial SVM-properties. Therefor I need some literature about application schemes and classification of algorithms, not only of parallel ones. If you know any literature dealing with this subject, please mail wittmann@informatik.tu-muenchen.de Thanks for your help Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super,ch.general From: rehmann@cscs.ch (Rene M. Rehmann) Subject: 2nd CFP: IFIP WG10.3 conference on programming environments, Switzerland Organization: Centro Svizzero di Calcolo Scientifico (CSCS), Manno, Switzerland 2nd Announcement CALL FOR PAPERS IFIP WG10.3 WORKING CONFERENCE ON PROGRAMMING ENVIRONMENTS FOR MASSIVELY PARALLEL DISTRIBUTED SYSTEMS April 25 - 30, 1994 Monte Verita, Ascona, Switzerland Massively parallel systems with distributed resources will play a very important role for the future of high performance computing. One of the current obstacles of these systems is their difficult programming. The proposed conference will bring together active researchers who are working on ways how to help programmers to exploit the performance potential of massively parallel systems. The working conference will consist of sessions for full and short papers, interleaved with poster and demonstration sessions. The Conference will be held April 25 - 30, 1994 at the Centro Stefano Franscini, located in the hills above Ascona at Lago Maggiore, in the southern part of Switzerland. It is organized by the Swiss Scientific Computing Center CSCS ETH Zurich. The conference is the forthcoming event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) on Programming Environments for Parallel Computing. The conference succeeds the 1992 Edinburgh conference on Programming Environments for Parallel Computing. SUBMISSION OF PAPERS Submission of papers is invited in the following areas: -- Programming models for parallel distributed computing -- Computational models for parallel distributed computing -- Program transformation tools -- Concepts and tools for the design of parallel distributed algorithms -- Reusability in parallel distributed programming -- Concepts and tools for debugging massively parallel systems (100+ processing nodes) -- Concepts and tools for performance monitoring of massively parallel systems (100+ processing nodes) -- Tools for application development on massively parallel systems -- Support for computational scientists: what do they really need ? -- Application libraries (e.g., BLAS, etc.) for parallel distributed systems: what do they really offer ? 
-- Problem solving environments for parallel distributed programming Authors are invited to submit complete, original, papers reflecting their current research results. All submitted papers will be refereed for quality and originality. The program committee reserves the right to accept a submission as a long, short, or poster presentation paper. Manuscripts should be double spaced, should include an abstract, and should be limited to 5000 words (20 double spaced pages); The contact authors are requested to list e-mail addresses if available. Fax or electronic submissions will not be considered. Please submit 5 copies of the complete paper to the following address: PD Dr. Karsten M. Decker IFIP 94 CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland IMPORTANT DATES Deadline for submission: December 1, 1993 Notification of acceptance: February 1, 1994 Final versions: March 1, 1994 CONFERENCE CHAIR Karsten M. Decker CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland phone: +41 (91) 50 8233 fax: +41 (91) 50 6711 e-mail: decker@serd.cscs.ch ORGANIZATION COMMITTEE CHAIR Rene M. Rehmann CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland phone: +41 (91) 50 8234 fax: +41 (91) 50 6711 e-mail: rehmann@serd.cscs.ch PROGRAM COMMITTEE Francoise Andre, IRISA, France Thomas Bemmerl, Intel Corporation, Germany Arndt Bode, Technical University Muenchen, Germany Helmar Burkhart, University Basel, Switzerland Lyndon J. Clarke, University of Edinburgh, UK Michel Cosnard, Ecole Normale Superieure de Lyon, France Karsten M. Decker, CSCS-ETH Zurich, Switzerland Thomas Fahringer, University of Vienna, Austria Claude Girault, University P.et M. Curie Paris, France Anthony J. G. Hey, University of Southhampton, UK Roland N. Ibbett, University of Edinburgh, UK Nobuhiko Koike, NEC Corporation, Japan Peter B. Ladkin, University of Stirling, UK Juerg Nievergelt, ETH Zurich, Switzerland Edwin Paalvast, TNO-TPD, The Netherlands Gerard Reijns, Delft University of Technology, The Netherlands Eugen Schenfeld, NEC Research Institute, USA Clemens-August Thole, GMD, Germany Owen Thomas, Meiko, UK Marco Vanneschi, University of Pisa, Italy Francis Wray, Cambridge, UK MONTE VERITA, ASCONA, SWITZERLAND Centro Stefano Franscini, Monte Verita, located in the scenic hills above Ascona, with a beautiful view on Lago Maggiore, has excellent conference and housing facilities for about sixty participants. Monte Verita enjoys a sub-alpine/mediterranean climate with mean temperatures between 15 and 18 C in April. The closest airport to Centro Stefano Franscini is Lugano-Agno which is connected to Zurich, Geneva and Basle and many other cities in Europe by air. Centro Stefano Franscini can also be reached conveniently by train from any of the three major airports in Switzerland to Locarno by a few hours scenic trans-alpine train ride. It can also be reached from Milano in less than three hours. For more information, send email to ifip94@cscs.ch For a PostScript-version of the CFP, anon-ftp to: pobox.cscs.ch:/pub/SeRD/IFIP94/CALL_FOR_PAPERS.ps Karsten M. Decker, Rene M. 
Rehmann --- Rene Rehmann phone: +41 91 50 82 34 Section for Research and Development (SeRD) fax : +41 91 50 67 11 Swiss Scientific Computing Center CSCS email: rehmann@cscs.ch Via Cantonale, CH-6928 Manno, Switzerland Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Thu, 14 Oct 93 09:02:12 EDT From: Peter Su Subject: Re: 12 ways Organization: School of Computer Science, Carnegie Mellon "ENM" == Eugene N Miya writes: ENM> Multilate the algorithm used in the parallel implementation to ENM> match the architecture. Where is the line between reasonable amounts of optimization and 'mutilating the algorithm'? If I take and implementation of an algorithm that does not vectorize well (say), and 'mutilate' it into an implementation that does, is that fair from the standpoint of good benchmarking? Say I do the same thing, but hack the code so it uses the CM-5 vector units, which are not yet well supported by all of TMC's compilers. Is that fair? Or, do I, for the purposes of benchmarking, have to regard the vendor's brain-dead compiler as part of the system. Aren't we trying to figure out how good the *hardware* is, not the hardware+compiler? Why benchmark a parallel computer on code that doesn't parallelize well? Enquiring minds want to know. Pete -- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Thu, 14 Oct 93 08:18:55 -0700 From: eugene@nas.nasa.gov (Eugene N. Miya) Subject: Attribution (ENM) While I have the comp.bench FAQ, the proper attribution is David Bailey. I suggest finding the reference for a further elaboration. You can also contact Dr. Bailey. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: Help---references to CORDIC algorithm Date: Fri, 15 Oct 1993 00:32:11 JST From: Shigeru Ishimoto Dear Grouper, I am looking for the paper on CORDIC algorithm which discovered in the period 1960-1970. The algorithm was discovered again by Dr. Richard Feynman in 1980's. Could anyone give me information. Thanks, ----- _____ | A I S T Shigeru Ishimoto (ishimoto@jaist.ac.jp) | HOKURIKU 18-1 Asahidai Tatsunokuchichou Nomigun Ishikawaken Japan o_/ 1 9 9 0 Japan Advanced Institute of Science and Technology,Hokuriku Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: fsang@kira.lerc.nasa.gov (Angela Quealy) Newsgroups: comp.parallel.pvm,comp.parallel Subject: other DQS users? Organization: NASA Lewis Research Center [Cleveland, Ohio] We recently acquired DQS for use at our site, and we are trying to come up with a policy for its use on our dedicated cluster of 32 IBM RS6000s. Our users will submit a variety of jobs to this test-bed environment, including test/debug runs, benchmark runs, and production jobs which could run for a full month or more. I was wondering what other sites are using DQS, what your experience has been so far, and what kind of queue configuration/policy you are using. Also, in what kind of environment are you running DQS? (a dedicated cluster, or a loose cluster of individually-owned workstations?) Angela Quealy quealy@lerc.nasa.gov -- *********************************************************************** * Angela Quealy quealy@lerc.nasa.gov * * Sverdrup Technology, Inc. 
(216) 977-1297 * * NASA Lewis Research Center Group * ***********************************************************************

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cosc19ub@rosie.uh.edu (Li, Sam) Subject: topological properties for different internetworking topologies Organization: University of Houston Hello netters, I am doing a survey report for comparing topological properties for various internetworking topologies. Any pointers or references concerning this will be appreciated. Sam Li cosc19ub@jetson.uh.edu

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From hecht@oregon.cray.com Thu Oct 14 15:14:13 1993 From: hecht@oregon.cray.com Subject: Delivering Parallel Performance

It seems that people talking about parallel performance often confuse or hide the important low-level details that provide that performance. Performance on parallel programs depends on several key factors, regardless of shared memory or distributed memory implementation. These factors are:

1) Memory
   a) Memory Latency
   b) Memory Bandwidth/Communication Bandwidth
2) Parallel Coordination
   a) Updates to Global data
   b) Barrier synchronization
3) Program Implementation
   a) Compute Intensity (ratio of ops/word)
   b) Amount of Parallel synchronization

Looking at memory latency, we find that shared memory latency to non-local data is 10 to 10,000 times LESS than that of distributed or cluster resources. This overhead factor, combined with the easier shared memory programming model, allows one to achieve greater relative performance. Among commercial shared memory systems the Cray APP is surpassed in this measure only by the Cray C90.

SHARED MEMORY LATENCY
=====================
Machine         CPUS  CPU type     Year  Measured Latency
--------------- ----  -----------  ----  ----------------
CRAY 1             1  proprietary  1976   188 nanoseconds
VAX 11/780         1  proprietary  1978  1200 nanoseconds
Cyber 205          1  proprietary  1981  1200 nanoseconds
FPS-164            1  proprietary  1981   543 nanoseconds
CRAY X-MP          2  proprietary  1982   171 nanoseconds
FPS-264            1  proprietary  1984   159 nanoseconds
Convex 210         1  proprietary  1987   440 nanoseconds
Convex 240         4  proprietary  198?   440 nanoseconds
CRAY Y-MP          8  proprietary  1988   150 nanoseconds
CRAY S-MP          8  SPARC        1989   900 nanoseconds
CRAY Y-MP/C90     16  proprietary  1991   100 nanoseconds
CRAY APP          84  Intel i860   1991   150 nanoseconds
SGI Challenge     18  Mips R4000   1993   853 nanoseconds

Cray APPs can be, and are, clustered together via HIPPI. Such a cluster is presented below for comparison with a diverse set of other distributed memory systems.

Distributed Memory/Cluster Memory
=================================
Machine, cpus, MEASURED BW/PE (MB/s), MEASURED Lat (usec), Peak BW/PE (MB/s), Peak Lat (usec), Source
-----------------------------------------------------------------------------------------------------
KSR 32pe 32 19 7 HSpeed Computing Conf
CRAY APP-cluster 1008 92 9 100 0.2 Compcon paper '93
Meiko CS-2 ? 44 25 10 OSU RFP58030026 pub. info
KSR 1088pe 1008 5 25 HSpeed Computing Conf
Intel Delta 240 7 45 30 0.15 joel@SSD.intel.com '93
RS/6000 IBM V-7 ? 5 140 express Newsletter '93
Convex HP/Meta-1 ? 12 150 cicci@hpcnnn.cerh.ch '93
Intel XP/S ? 14 159 OSU RFP58030026 pub. info
nCube/2 ? 2 154 ruehl@iis.ethz.ch '93
IBM SP1 8-64 64 4 220 40 0.5 elam@ibm.com '93
RS/6000 bit-3 ? 10 240 express Newsletter '93
RS/6000 ethernet ?
.1 3500 express Newsletter '93 Definitions ----------- us - micro seconds (10^-6 sec) BW - Bandwidth (measured in MBytes/sec) MB/s - MegaBytes/sec This memory/communication latencies are the bottleneck on the overheads associated with parallelism (barriers, critical sections, shared updates, etc.) And this directly affects performance on real algorithms and the speedups that can be obtained. =========================================================================== Pat Hecht Cray Research Superservers, Inc. 919-544-6267 hecht@cray.com =========================================================================== Other information ----------------------------------------------------------------------- * HP Meta-1 (7100 chip, FDDI connection), 11.5 MB/s on packets of at least 1kb. * CRAY APP (i860 based, each CRAY APP has 84 PE, up to 12 systems in a cluster, for up to 1008 processors) * RS/6000 IBM v-7 switch * Hspeed Computing Conf = The Conference on High Speed Computing 3/29/93 * OSU RFP58030026 = part of an RFP for a computer system, by Oregon Law this info is part of the public record APP Background (for those who don't know, ignore if you know) ------------------------------------------------------------- Up to 84 processors (i860) per APP module flat shared memory (equal access) via crossbar technology ANSI HIPPI ports for clustering or networking and VME for I/O subsystem low parallel overheads in a FORTRAN or C programming environment Peaks Rates (6 Gflops 32-bit, 3 Gflops 64-bit) (it really sustains Gflops on lots of stuff - FFTs, Seismic, radar, image processing, solvers, etc.) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cwpjr@dcache.uucp (Clyde Philips) Subject: Re: Massive Parallel Processors References: <1993Oct8.183349.26884@hubcap.clemson.edu> Organization: data-CACHE Corporation, Boise, Idaho We here at Data-CACHE sell massively parallel db engines primarily to offload mainframe db's for daily Decision Support. Cheers, Clyde Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kasyapa balemarthy Subject: Information wanted on Warp and iWarp Reply-To: kasyapa balemarthy Organization: Univ. of Notre Dame References: <93-10-031@comp.compilers> Hi netters, I want some information on the Warp/iWarp processor array developed by Carnegie-Mellon Univ & Intel. Can somebody post addresses of ftp sites or other electronic sources where I can find info about it? Thanks in advance, Kasyapa -- Kasyapa Balemarthy kbalemar@chopin.helios.nd.edu University of Notre Dame Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: leclerc@cis.ohio-state.edu (Anthony Leclerc) Subject: Any good simulators Organization: The Ohio State University Dept. of Computer and Info. Science [Check parlib] I'm teaching an "Architecture of Advanced Computer Systems" course in the Spring semester. Does anyone know of good simulators which are publically avaiable? Sincerely...Tony Leclerc Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: elias@TC.Cornell.EDU (Doug Elias) Newsgroups: comp.sys.super,comp.arch,comp.parallel.pvm,comp.parallel Subject: Parallel Programming Environment: Update#2 Organization: Software and Consulting Support Group, Cornell Theory Center, C.U. 
The PPE survey, available via anonymous-ftp to theory.tc.cornell.edu as /pub/PPE_Survey, has had a couple of lines added asking for the total amount of time required to complete the survey (the average is now ~50 minutes), and for a self-rating of parallelism-expertise. Any other suggestions as to additions/changes are very welcome. "Vendors" have been noticably absent from the respondents, but shouldn't feel unwelcome: you folks, as part of your work, necessarily port or help port applications to your products, and this makes you "developers", as far as i'm concerned. That does still exclude pure-marketeers and sales-droids, though...(not too) sorry. i've received quite a few more requests for the survey results: i'll be posting whatever i've got sometime in December, after having presented it at both SC'93 and CW'93. There have also been quite a few expressions of sympathy for both the amount of work involved and for the likely lack-of-response from the bulk of our Club...i appreciate the former, and bow to the latter -- boy, talk about "pulling teeth", getting folks "in the know" to share their insights is definitely a primo job -- for a masochist! C'mon, guys...it doesn't take that much time out of your week, and i've got fistfuls of messages indicating how many people out there would really appreciate your opinions! Now for some flamebait: One of the responses i got, from a gov't lab employee, had some things to say that i felt might be of interest, so i got permission to include it, as long as it wasn't signed: >I thought you should be aware that we have more than 50 people here in >my lab developing parallel applications, and I don't think that any of >the methods that they are using meet the requirements for your survey, >even after relaxing them. This is because all development is being >done for Intel Paragon, Intel iPSC/860, nCUBE-2, and shared memory >Crays, all using vendor-specific syntax. For message passing codes, >it is generally an easy task to change from one flavor to another >(provided you don't care about performance). It is amusing that the >things you mention (Express, ISIS, Network-Linda, p4, PVM, etc) are >either things I never heard of, not available on our machines, would >just offer an alternative to the vendor syntax, or else pitifully >inadequate for the architecture. It makes me wonder what useful >information you will get out of your survey, because I feel you may >miss a very large (and perhaps the largest) segment of the MP user >community - those looking for performance. ...to which i responded, in part: >...i just wish that we all of us had the luxury >of being able to essentially ignore all issues except "raw performance", >but your earlier comments now having me wondering just how many of us >actually ARE in that boat -- you seem to feel that it will be a >sizable portion, if not a majority. It would make sense that those of >us in the other boat(s) would not really have much contact with "you", >as our sets of defining concerns would be so minimally overlapping. ...and received back the following: >I guess I misspoke. We don't ignore all issues except raw performance. >Everybody assigns weights to the different factors such as > >- performance >- portability >- general ease of use >- availability of tools (performance monitoring, debugging, etc) >- familiarity >- etc...? > >We have BIG weights on performance and portability, which has led us >to favor message passing C or Fortran for distributed memory >environments. 
There are a variety of projects about other paradigms >going on that we keep some minimal knowledge of, such as Split-C, HPF >(soon to be renamed LPF), multi-threaded machines, and others. When >they become reality as products, maybe some people will switch. > >The only reason to go to more processors is > >1. doing a computation you could not do otherwise in a finite amount of > time (i.e., bigger problems). >2. doing the same problems faster. > >If Parallel computing does not meet one of these goals, then it is >nothing more than an academic exercise (which I occasionally do too!). >There are numerous examples of codes that people tried to parallelize >only to discover (even on an MP machine) that communication limited the >performance gains. Our Paragon can send data at 40 microsecond latency >and 200 megabyte bandwidth (including software overhead), but even this >is not good enough for some things. It will be a long time before LANS >or WANS can challenge that. > >I would like to see more cooperation between the distributed computing >world and the MP world, but so far they have not had too much success. >The only overlap that I have seen so far has been some minimal support >for PVM on MP machines, but having looked at the code for PVM, you might >as well throw it away as a first step in porting to MP machines. That >leaves only the syntax and semantics, which are also questionable in >light of the MPI effort. > >Heterogeneous computing is something we care about occasionally, partly >because you want to use the right tool for the right job. For example >we routinely send data from an MP machine to a large SGI for rendering >right now, but are considering pushing the rendering back to the MP >machine because the SGI can't keep up with the flow of data. Someday >maybe they'll migrate back. i include these fragments in order to be sure the context is complete, please forgive the length. These comments leave me with the following questions: 1) How prevalent is the attitude that the environments i mentioned as examples of available PPEs (Express, ISIS, Network-Linda, p4, PVM, etc) are either "just alternatives to the vendor syntax" (i.e., offer nothing significant over-and-above what is "natively" available), and/or "are pitifully inadequate for the architecture"? 2) How much agreement is there with the identification of "performance" as the primary concern? More-to-the-point, how much MORE performance would you require in order to offset how much MORE difficulty-of-use? Or lack-of-portability? 3) If "ease-of-use" is not maximized, how hard will it be to gain more of a foothold with the "dusty-deck", "just give me more Crays" crowd? 4) i get a very strong sense of "if it can't do it now, forget it!" ... on the other hand, PPEs have shown, in my opinion, large increases in performance (computational and communication) over the last 3-5 years, and, with enough interest and pushing, will continue to make significant advances. The potential, to me, is sufficient to warrant continued involvement...but is this a very common attitude? Or are there distinct groups that collectively feel one way or the other, and, if so, why? 5) Is something like PVM really such a bad fit to an MP like the Paragon? If so, why? i'd guess "unnecessary distributed overhead not required by an MP"... 6) How important is the prospect of "heterogeneous computing", and how much effort should be put into preparing applications for it NOW, regardless of the relative inefficiency of the current tools? 
7) How likely is it that the MPI standard will result in "native" implementations of MPI, rather than interfaces to it from PPEs like PVM, p4, etc.? Does MPI sound the deathknell for the current crop of non-native PPEs? Thanks for your attention, and, please, download the survey and spend some time sharing your opinions with the rest of us. doug -- # ____ |Internet: elias@tc.cornell.edu #dr _|_)oug|USmail: Sci.Comp.Support/Cornell Theory Center # (_| | 737 TheoryCtrBldg/C.U./Ithaca/N.Y./14853-3801 # (_|__ |MaBelle: 607-254-8686 Fax: 607-254-8888 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stratton@dcs.warwick.ac.uk (Andrew Stratton) Subject: `Twelve ways to fool...' reference and a short version. Message-ID: <1993Oct15.133237.1096@dcs.warwick.ac.uk> Sender: news@dcs.warwick.ac.uk (Network News) Nntp-Posting-Host: box Organization: Department of Computer Science, Warwick University, England Date: Fri, 15 Oct 1993 13:32:37 GMT I received a few responses from my request. Firstly, a paper reference. --------------------------- > >David H. Bailey >Twelve ways to fool the masses when giving performance results on parallel >computers >Supercomputer 45, VIII-5, September 1991. Now, a cut down text version of the paper from the Author from Jan 1992. ------------------------------------------------------------------------ >Many of you have read my semi-humorous, semi-serious article "Twelve >Ways to Fool the Masses When Giving Performance Results on Parallel >Computers", which appeared in Supercomputing Review in August, also in >Supercomputer (a European publication) in September. I have attached >a highly condensed version of this piece to the end of this note. >Read one of the above references for the full text. > >1. Quote 32-bit performance results, not 64-bit results, or compare >your 32-bit results with others' 64-bit results. ** > >2. Present inner kernel performance figures as the performance of >the entire application. > >3. Quietly employ assembly code and other low-level language >constructs, or compare your assembly-coded results with others' >Fortran or C implementations. > >4. Scale up the problem size with the number of processors, but don't >clearly disclose this fact. > >5. Quote performance results linearly projected to a full system. ** > >6. Compare your results against scalar, unoptimized code on Crays. > >7. Compare with an old code on an obsolete system. > >8. Base MFLOPS operation counts on the parallel implementation >instead of on the best sequential implementation. > >9. Quote performance in terms of processor utilization, parallel >speedups or MFLOPS per dollar (peak MFLOPS, not sustained). ** > >10. Mutilate the algorithm used in the parallel implementation to >match the architecture. In other words, employ algorithms that are >numerically inefficient, compared to the best known serial or vector >algorithms for this application, in order to exhibit artificially high >MFLOPS rates. > >11. Measure parallel run times on a dedicated system, but >measure conventional run times in a busy environment. > >12. If all else fails, show pretty pictures and animated videos, >and don't talk about performance. 
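As a back-of-the-envelope companion to the list above and to the latency/bandwidth discussion earlier in this digest, the short C sketch below tabulates message cost and effective bandwidth under the usual T(n) = latency + n/bandwidth model. The 40 us / 200 MB/s figures echo the Paragon numbers quoted in the survey thread; the message sizes are arbitrary assumptions, so treat it as an illustration rather than a benchmark.

    /* msgcost.c - rough illustration of why startup latency, not peak
     * bandwidth, dominates short messages (barriers, shared updates, etc.).
     * Parameters are assumptions: 40 us / 200 MB/s echo figures quoted
     * earlier in this digest; the message sizes are made up. */
    #include <stdio.h>

    int main(void)
    {
        double alpha = 40e-6;   /* startup latency, seconds per message */
        double beta  = 200e6;   /* asymptotic bandwidth, bytes/second   */
        double n[]   = { 8, 64, 1024, 8192, 65536, 1048576 };
        int i;

        printf("%10s %12s %12s\n", "bytes", "time (us)", "eff. MB/s");
        for (i = 0; i < 6; i++) {
            double t = alpha + n[i] / beta;      /* T(n) = alpha + n/beta */
            printf("%10.0f %12.2f %12.1f\n", n[i], t * 1e6, n[i] / t / 1e6);
        }
        /* half of peak bandwidth is reached only at n = alpha * beta bytes */
        printf("n_1/2 = %.0f bytes\n", alpha * beta);
        return 0;
    }

With these numbers an 8-byte update sees roughly 0.2 MB/s of effective bandwidth and the half-performance message length works out to 8000 bytes, which is the arithmetic behind both the "latencies are the bottleneck" remark and the warning above about quoting peak rates.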
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: C.D.Collier@ecs.soton.ac.uk (Christine Collier) Subject: Applications of High Performance Computers dans La Belle France Organization: Electronics and Computer Science, University of Southampton Applications for High Performance Computers Date: November 2nd, 3rd, 4th and 5th, 1993 Applications for High Performance Computing Registration Form Title . . . . . . . . . . . . . . . . . Surname . . . . . . . . . . . . . . . . First Name . . . . . . . . . . . . . . . Institution . . . . . . . . . . . . . . . . . . . . . . . . . . . Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tel: . . . . . . . . . . . . . . . . Fax: . . . . . . . . . . . . . . . . . I enclose a cheque in the sum of . . . . . . . . . . . . . . . . . . Made payable to "University of Southampton". Please forward cheque and registration to Telmat Informatique. Venue: Telmat Informatique Z.1. - 6 Rue de l'industrie, B P 12 68360 Soultz Cedex France Local Accommodation Arrangements contact: Rene Pathenay/Francoise Scheirrer Telmat Informatique Tel: 33 89 765110 Fax: 33 89 742734 Email: pathenay@telmat.fr Day 1 14.00 Start Introduction and Welcome Session 1 Overview Introduction to Parallel Hardware Introduction to Parallel Software Panel Discussion Day 2 Start 09.30 Session 2 Performance Characterization Low-level Benchmarks and Performance Critical Parameters CFD Session 3 Applications I Seismic Modelling Climate Modelling Panel Discussion Day 3 Start 9.30 Session 4 HPC Standards HPF Message-Passing Interface Session 5 Parallel Matrix Kernels Structural Analysis Panel Discussion Day 4 Start 09.00 Session 6 The Parkbench Initiative Grand Challenge Applications Panel Discussion. Close 12.15 Soultz, France Nov 2nd-5th, 1993 The aim of this course is to understand some aspects of current applications of high performance computers. There are three main objectives: 1. To give an overview of parallel hardware and software and to explore the role of performance critical parameters. Matrix kernels are also explored. 2. To give awareness of the tools that are likely to be important in the future. This includes HPF (High Performance Fortran) and the message passing standards. 3. To put together applications in diverse areas of science and engineering. There are speakers on seismic modelling, CFD, Structural Analysis, Molecular dynamics and climate modelling. Cost 375 pounds sterling (Full Rate) 275 pounds sterling for academic participants and members of ACT. Costs include lunch and refreshments throughout the day. Minimum numbers 10 This course cannot be given unless there is a minimum of 10 participants. It will be necessary to receive your registration no later than Monday 25th October, 1993. Should the course not run, then all registration fees will be returned. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rehmann@cscs.ch (Rene M.
Rehmann) Subject: 2nd CFP: IFIP WG10.3 conference on programming environments, Switzerland Reply-To: rehmann@cscs.ch Organization: Centro Svizzero di Calcolo Scientifico (CSCS), Manno, Switzerland 2nd Announcement CALL FOR PAPERS IFIP WG10.3 WORKING CONFERENCE ON PROGRAMMING ENVIRONMENTS FOR MASSIVELY PARALLEL DISTRIBUTED SYSTEMS April 25 - 30, 1994 Monte Verita, Ascona, Switzerland Massively parallel systems with distributed resources will play a very important role in the future of high performance computing. One of the current obstacles to these systems is that they are difficult to program. The conference will bring together active researchers who are working on ways to help programmers exploit the performance potential of massively parallel systems. The working conference will consist of sessions for full and short papers, interleaved with poster and demonstration sessions. The Conference will be held April 25 - 30, 1994 at the Centro Stefano Franscini, located in the hills above Ascona at Lago Maggiore, in the southern part of Switzerland. It is organized by the Swiss Scientific Computing Center CSCS ETH Zurich. The conference is the next event of working group WG 10.3 of the International Federation for Information Processing (IFIP) on Programming Environments for Parallel Computing. The conference succeeds the 1992 Edinburgh conference on Programming Environments for Parallel Computing. SUBMISSION OF PAPERS Submission of papers is invited in the following areas: -- Programming models for parallel distributed computing -- Computational models for parallel distributed computing -- Program transformation tools -- Concepts and tools for the design of parallel distributed algorithms -- Reusability in parallel distributed programming -- Concepts and tools for debugging massively parallel systems (100+ processing nodes) -- Concepts and tools for performance monitoring of massively parallel systems (100+ processing nodes) -- Tools for application development on massively parallel systems -- Support for computational scientists: what do they really need? -- Application libraries (e.g., BLAS, etc.) for parallel distributed systems: what do they really offer? -- Problem solving environments for parallel distributed programming Authors are invited to submit complete, original papers reflecting their current research results. All submitted papers will be refereed for quality and originality. The program committee reserves the right to accept a submission as a long, short, or poster presentation paper. Manuscripts should be double spaced, should include an abstract, and should be limited to 5000 words (20 double-spaced pages). The contact authors are requested to list e-mail addresses if available. Fax or electronic submissions will not be considered. Please submit 5 copies of the complete paper to the following address: PD Dr. Karsten M. Decker IFIP 94 CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland IMPORTANT DATES Deadline for submission: December 1, 1993 Notification of acceptance: February 1, 1994 Final versions: March 1, 1994 CONFERENCE CHAIR Karsten M. Decker CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland phone: +41 (91) 50 8233 fax: +41 (91) 50 6711 e-mail: decker@serd.cscs.ch ORGANIZATION COMMITTEE CHAIR Rene M.
Rehmann CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland phone: +41 (91) 50 8234 fax: +41 (91) 50 6711 e-mail: rehmann@serd.cscs.ch PROGRAM COMMITTEE Francoise Andre, IRISA, France Thomas Bemmerl, Intel Corporation, Germany Arndt Bode, Technical University Muenchen, Germany Helmar Burkhart, University Basel, Switzerland Lyndon J. Clarke, University of Edinburgh, UK Michel Cosnard, Ecole Normale Superieure de Lyon, France Karsten M. Decker, CSCS-ETH Zurich, Switzerland Thomas Fahringer, University of Vienna, Austria Claude Girault, University P.et M. Curie Paris, France Anthony J. G. Hey, University of Southampton, UK Roland N. Ibbett, University of Edinburgh, UK Nobuhiko Koike, NEC Corporation, Japan Peter B. Ladkin, University of Stirling, UK Juerg Nievergelt, ETH Zurich, Switzerland Edwin Paalvast, TNO-TPD, The Netherlands Gerard Reijns, Delft University of Technology, The Netherlands Eugen Schenfeld, NEC Research Institute, USA Clemens-August Thole, GMD, Germany Owen Thomas, Meiko, UK Marco Vanneschi, University of Pisa, Italy Francis Wray, Cambridge, UK MONTE VERITA, ASCONA, SWITZERLAND Centro Stefano Franscini, Monte Verita, located in the scenic hills above Ascona, with a beautiful view of Lago Maggiore, has excellent conference and housing facilities for about sixty participants. Monte Verita enjoys a sub-alpine/mediterranean climate with mean temperatures between 15 and 18 C in April. The closest airport to Centro Stefano Franscini is Lugano-Agno, which is connected to Zurich, Geneva, Basle and many other cities in Europe by air. Centro Stefano Franscini can also be reached conveniently by train from any of the three major airports in Switzerland to Locarno by a few hours' scenic trans-alpine train ride. It can also be reached from Milano in less than three hours. For more information, send email to ifip94@cscs.ch For a PostScript-version of the CFP, anon-ftp to: pobox.cscs.ch:/pub/SeRD/IFIP94/CALL_FOR_PAPERS.ps Karsten M. Decker, Rene M. Rehmann --- Rene Rehmann phone: +41 91 50 82 34 Section for Research and Development (SeRD) fax : +41 91 50 67 11 Swiss Scientific Computing Center CSCS email: rehmann@cscs.ch Via Cantonale, CH-6928 Manno, Switzerland Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm From: levy@ellc6.epfl.ch () Subject: Information about PVM Organization: Ecole Polytechnique Federale de Lausanne Hello, I'm interested in finding out whether the PVM package offers a way to do multicast operations. If the answer is positive, I'd like to know how these multicast groups are constructed and manipulated (joining and quitting). Thanks and best regards. Juan Pablo Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: broegb@jasper.CS.ORST.EDU (Bob Broeg) Subject: Topologies of Parallel Machines Organization: Computer Science Department, Oregon State University Some months ago, an article was posted to this group (I believe) which listed several parallel machines, their underlying topology, and status of the manufacturer (still in business or not). Unfortunately, I did not save the article and now wish I had. Did anyone save this article? And, if you did, would you kindly send it to me?
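On the PVM multicast question a few posts back: PVM 3 does support dynamic groups and group broadcast. A minimal sketch, assuming PVM 3.x with the standard group-server library, an arbitrary group name and message tag, and no error checking; it is only an illustration of the calls involved, not production code.

    /* pvm_groups.c - sketch of PVM 3 group construction and multicast.
     * Group name "workers" and tag 42 are arbitrary; run several copies
     * (spawned or started by hand) to see the effect. */
    #include <stdio.h>
    #include "pvm3.h"

    #define TAG 42

    int main(void)
    {
        int mytid = pvm_mytid();               /* enroll this task in PVM   */
        int inum  = pvm_joingroup("workers");  /* join (or create) a group  */

        /* ... make sure the other members have joined before broadcasting ... */

        if (inum == 0) {                       /* first member plays sender */
            int value = 123;
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&value, 1, 1);
            pvm_bcast("workers", TAG);         /* to every current member   */
        } else {
            int value;
            pvm_recv(-1, TAG);                 /* receive from any task     */
            pvm_upkint(&value, 1, 1);
            printf("t%x (member %d) got %d\n", mytid, inum, value);
        }

        pvm_lvgroup("workers");                /* quit the group            */
        pvm_exit();
        return 0;
    }

Joining and quitting are just pvm_joingroup() and pvm_lvgroup(), and pvm_mcast() can be used instead of pvm_bcast() when you want to multicast to an explicit list of task ids rather than to a named group.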
Thanks Bob ------------------------------------------------------------------ Bob Broeg | Department of Computer Science | Internet: broegb@cs.orst.edu Oregon State University | Phone : 503-737-4052 Corvallis, OR 97331-3902 | Fax : 503-737-3014 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: urban@CS.ColoState.EDU (david urban) Subject: Parallel Terrain Rendering Organization: Colorado State University, Computer Science Department I am doing a term/research paper for a parallel programming class on parallel terrain rendering. I have one reference to a paper by Dr. Ken Musgrave. The other references I had were lost when my mail file was deleted by mistake. I would appreciate any help in locating other technical reports or reference material on this subject. Thanks in advance for all the help. David S. Urban -- David S. Urban email : urban@cs.colostate.edu To be the person, you must know the person. To know the person, you must understand the person. To understand the person, you must listen. To listen, you must open your mind and put aside all preconceived ideas and notions. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ravin@eecg.toronto.edu (Govindan Ravindran) Subject: Multiprocessor Simulator Hi, Could someone let me know of ftp sites where I can get an execution-driven or trace-driven shared-memory multiprocessor simulator for research use? Thanks. -Ravindran,G. (ravin@eecg.toronto.edu) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: muchen li Subject: Help, Intel Paragon Documents Reply-To: muchen li Organization: University of Notre Dame, Notre Dame Dear Netter: I would appreciate it if anyone could give some hints on where to find documents on the Intel Paragon architecture, especially on how the Paragon supports a distributed shared memory paradigm, as claimed, even though it is a message-passing multicomputer. Thanks. -- *-------------------------------------------------------------------* | Michael Muchen Li Office (219) 631-8854 | | Department of Computer Sci. & Engr. Home (219) 232-6713 | | University of Notre Dame | | Notre Dame, IN 46556 Email to: mli@bach.helios.nd.edu| *-------------------------------------------------------------------* Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: schooler@apollo.hp.com (Richard Schooler) Subject: Parallel Tools Survey Organization: HP/Apollo Massachusetts Language Lab I haven't gotten anything in a while, so here goes. My original question was: What tools exist for industrial (non-research) parallel programming? In more detail, if you have experience trying to develop real applications on parallel architectures (shared-memory multiprocessors, massively-parallel machines, workstation clusters, etc.), what is your view on the level of support for such activities? What tools do the hardware vendors supply? Are there third-party solutions? Public-domain software? What are the most useful tools? Any details on why you found particular tools good or bad would be very helpful. Here are the (non-commercial) responses: ================================================== I have implemented an eigenvalue solver on two different platforms: an nCUBE-2 and a cluster of SUN4 workstations. On the nCUBE-2 I used their "native" environment.
They have some additional software from Express for implementing certain communication primitives very efficiently and for performance tuning. They have a debugger that I was not able to figure out. On the cluster I used PVM (versions 2.4 and 3.1). Debugging is a nightmare. There are no tools for performance tuning. I haven't tried Xab yet. From what I have seen in the literature it is a good debugger. For performance tuning, I have been creating my own trace files in PICL format. At the end of the program those trace files (kept in memory to avoid disturbances) are dumped into a file that can be observed with Paragraph. It helped enormously. Paragraph is an excellent tool. PVM, PICL and Paragraph are public domain tools. There is a committee (MPIF) that is drafting a standard for message passing systems. Such a standard will be extremely useful. Your partners at CONVEX are very active in PVM customization. ================================================== What tools are there for programming multiprocessors? Outside of tools specific to certain scientific applications or database applications my experience is that there is C-threads and maybe C++/threads if you are lucky. If you are really lucky, the C compiler has no bugs, and on miraculous systems the operating system is pretty stable too. Here, when I say C, I mean C and probably Fortran too. My best experience has been on the Cray machines. Stable OS, stable compilers, good perf. analysis tools. Unfortunately, these machines are hardly ever available in multiprocessor mode. Mostly they are in "partitioned to run lots of batch jobs in parallel"-mode. UniCOS is kind of weird, and lack of VM makes interactive use awkward. But overall, a very nice system. Next in line is the KSR-1. Less stable compilers, less stable OS, and lousy processor performance. But, not bad overall. No tools other than C-threads and some loop parallelization stuff that probably doesn't work well (never tried it). They have some profiling stuff and very good support for standard UNIX tools, since their OS is standard UNIX. The TMC machines are pitifully bad. They are in a constant revolving beta-release. Nothing ever works and the new release of the OS or compilers always breaks all existing code. Don't know about Intel machines, but I can't imagine that they'd be much better than TMC. The Maspar is a nice environment on a bad architecture. The processors are just too small to be useful on big problems. They have a nice parallel C, good profiling and debugging tools and a pretty stable OS environment. I don't know what's around for MP workstations. I think that's where the interesting work is to be done, because the machines have a stable base to work from...good tools should be coming along. ================================================== A good starting point might be the survey report by Louis Turcotte: /parallel/reports/misc/soft-env-net-report.tar.Z Report: "A Survey of Software Environments for Exploiting Networked Computing Resources" by Louis Turcotte covering over 60 distributed and other networked software environments (150 pages). Announcement of report contains author contact details. Available via anonymous ftp from unix.hensa.ac.uk ================================================== Greetings! Some time back I came to know about a tool, MENTAT, which they call an object-oriented programming tool for parallel programming. Also, I found out that some people in industry are using it to develop FEA applications on a workstation cluster.
MENTAT is a public domain tool developed at and available from the Univ. of Virginia. I have never used it and don't know how good it is. But if you want to know more, it is available at the following location through anonymous ftp. uvacs.cs.virginia.edu /pub/mentat -> the software is here /pub/techreports -> contains a couple of reports, CS91-07, CS91-31 and 32, which will give you more details. There are not many tools for developing software on parallel computers. I am a graduate student and I did a lot of programming on the CM-5, using both C* and the CMMD message passing libraries. There is PRISM on the CM-5, which helps in debugging and profiling. But I found it useful only with C*, and it is not good when using CMMD. My feeling is there are no good tools for programming the CM-5. Still, many of these tool developments are at the research project level; some time back I saw a project description at the Univ. of Wisconsin for developing tools for the CM-5 and Intel Paragon. ================================================== -- Richard schooler@apollo.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: C.PETERSON@AppleLink.Apple.COM (Springer Verlag Publishers,PAS) Subject: Visualization Book Release FOR IMMEDIATE RELEASE Contact: TELOS, The Electronic Library of Science (408) 249-9314 FIRST MULTIMEDIA BOOK ON SCIENTIFIC VISUALIZATION PUBLISHED BY TELOS Santa Clara, California, October 15, 1993 - TELOS (The Electronic Library of Science) announces the release of "Visualization of Natural Phenomena," the first multimedia book/CD-ROM package to achieve a high level of technology and information integration for the scientific community, by Robert S. Wolff and Larry Yaeger of Apple Computer. "Visualization of Natural Phenomena" is an integrated book/CD-ROM package which allows users to interactively explore the techniques used in scientific visualization. It is designed for a broad range of computer professionals, scientists, researchers, teachers, students, and other interested readers. Employing the image as the fundamental concept, the book covers a wide range of subjects under the broad heading of scientific visualization, including: image display and processing; image animation; video; visualization of multiparameter data; terrain rendering; 3-D solid modeling, rendering and animation; and visualization in film and TV. Practical hints on the use of commercial and public domain software in producing scientific visualization are also provided, as are discussions of the computation and production of the images used in the text. This book contains over 300 full-color images and black and white illustrations. Technical Notes contain additional technical and algorithmic discussions of the material. There is also a Special Appendix describing in detail how the book and CD-ROM were produced, and a Glossary of terms is provided in the book and on the disc. Priced at $59.95, Visualization of Natural Phenomena (ISBN 0-387-97809-7) comes with a CD-ROM containing more than 100 QuickTime(tm) animations covering a wide range of visualization applications, along with explanations of the various phenomena depicted. The CD also contains public domain images, as well as Mathematica(r), NCSA, Spyglass(r), and other third-party software to supply users with a broad range of visualization examples. The CD-ROM is intended for use on Apple Macintosh equipment, and is integrated with the book through the use of an icon library for easy cross-referencing.
VNP is structured so users do not have to access the CD-ROM in order to take advantage of the book's content. However, it is highly recommended that users do refer to the book and disc in tandem when working through the materials in order to fully benefit from the interactive learning experience this package provides. The recommended system requirements are: Color Macintosh(r), System 7.01 or 7.1, 8 MB RAM, 13" RGB monitor, 5 MB free hard disk space, CD-ROM drive, QuickTime 1.5 extension (included on CD-ROM) needed to play movies. ABOUT THE AUTHORS Robert S. Wolff is the Project Leader of Advanced Applications in Apple Computer's Advanced Technology Group, where, since 1988, he has specialized in developing prototype environments for scientific computing. Before coming to Apple, he was a planetary astrophysicist at NASA's Jet Propulsion Lab (JPL). He has produced numerous visualizations and animations and has participated in several courses and panels on visualization at SIGGRAPH. He is currently Visualization Editor for Computers in Physics and is a Co-Investigator on the Volcanology Team on NASA's Earth Observing Systems (EOS) Mission. Dr. Wolff has a Ph.D. in astrophysics from Brandeis University. Larry Yaeger's background includes computational fluid dynamics, computer graphics imaging, and neural network research. He has carried out pioneering simulations of fluid flows over the Space Shuttle and was one of the principal architects of a computer graphics rendering software at Digital Productions. Larry has contributed to the design and development of the software tools and production techniques used for special effects in several films and commercials. As a Principal Engineer in Apple's Vivarium Program, he built neural network simulators and built a system for integrating Macintosh graphics into routine film production for Star Trek: The Next Generation. Now as part of the Adaptive Systems/Advanced Technology Group at Apple Computer, Inc. he is extending his character recognition work to pen-based microprocessors and is currently combining computer graphics, neural networks, and genetic algorithms to study artificial life and artificial intelligence. ABOUT TELOS TELOS is an imprint of Springer-Verlag New York, with publishing facilities at 3600 Pruneridge Avenue, Suite 200, Santa Clara, Calif. 95051. Its publishing program encompasses the natural and physical sciences, computer science, economics, mathematics, and engineering. TELOS strives to wed the traditional print medium with the emerging electronic media to provide the reader with a truly interactive multimedia information environment. All TELOS publications delivered on paper come with an associated electronic component. To order this product, please visit your local bookstore or contact Springer-Verlag at (800) 777-4643 (in New Jersey, call (201) 348-4033). Fax orders to (201) 348-4505. International customers should contact their nearest Springer-Verlag office. For information on bulk sales, please contact Mark Puma at (212) 460-1675. For information on 30-day examination copies for course adoption, please call (201) 348-4033 ext. 660. For book review copies, please contact TELOS directly at (408) 249-9314. -30- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kasyapa balemarthy Subject: Request for info on iWarp/Warp Reply-To: kasyapa balemarthy Organization: Univ. 
of Notre Dame Hi Netters, I need addresses of ftp sites/electronic journals about the iWarp/Warp processor array developed at Carnegie-Mellon University. Thanks in advance, Kasyapa Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cosc19ub@menudo.uh.edu (Sam S. Li) Newsgroups: comp.parallel Subject: topological properties for various internetworking topologies Organization: University of Houston Hello netters, I am writing a survey report for comparing topological properties for different internetworking topologies, e.g. star, shuffle, hypercube, etc.. Any pointers or references will be appreciated. Sam S. Li cosc19ub@menudo.uh.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: Re: Parallel Fourier transforms From: hcole@sagebrush.nrcabq.com (Howard R. Cole) References: <1993Oct11.154829.27087@hubcap.clemson.edu> I also am looking for an algorithm for a parallel 2D fourier transform. If anyone knows about such an algorithm, please let me know. Thanks. - Howard Cole Nichols Research Corp. | hcole@tumbleweed.nrcabq.com 2201 Buena Vista SE | Suite 203 | "If we can't fix it - Albuquerque, NM 87106 | it just aint' broke!" Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: manu@CS.UCLA.EDU (Maneesh Dhagat) Subject: Parallel Implementations of SETL? Organization: UCLA, Computer Science Department Are there any parallel implementations of SETL? I know that Paralation Lisp, CM Lisp, SQL, APL, etc. come in a similar class of (collection-oriented) languages, but I want to specifically know if SETL has been implemented on parallel machines. If you can email me any references/ftp sites, it would be much appreciated. --Maneesh Dhagat -- --------------------------------------------------- Maneesh Dhagat (manu@cs.ucla.edu) University of California, Los Angeles, CA 90024 --------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: gross@noether.ucsc.edu (Mike Gross) Newsgroups: comp.parallel,sci.math.num-analysis Subject: Re: Parallel Fourier transforms Organization: University of California, Santa Cruz In comp.parallel I wrote: >I need to solve Poisson's equation on an i860 supercomputer with local >memory only. I would like to use Fourier transform methods to solve >the equation, but it is not obvious to me how to perform a global operation >such as a Fourier integral (or FFT) efficiently on data that must be fragmented >across several processors. In order to get the dynamic range I need in my >simulation, I require a space-domain mesh that is several times the size of >local memory. >Does anyone out there know of any good references for this problem? Or better >yet, are there any publicly available routines? My problems sounds like one >that has been attacked many times by parallel numerical analysts. I hope this >isn't a FAQ. I may be in a position to answer my own question, with a little help from Michel Beland (beland@cerca.umontreal.ca), who provided a name from memory. I did a citation search on that name, and also on the title words "parallel" and "fourier." What came up was the following two articles: Ganagi & Neelakantan Implementation of the Fast Fourier Transform Algorithm on a Parallel Processor Current Science, 61(2), 105-108. 
Tong & Swarztrauber Ordered Fast Fourier Transforms on a Massively Parallel Hypercube Multiprocessor. Journal of Parallel & Distributed Computing 12(1), 50-59. Both articles are from 1991. There is also a more recent reference, which I haven't been able to decipher, from the folks at IBM Watson: Christidis & Pattnaik Parallel Algorithm for the Fast Fourier Transform. EWECF Conference 13(4), 533-538. Mike Gross Physics Board Univ of California GO SLUGS!!!!! Santa Cruz, CA 95064 gross@physics.ucsc.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rarubio@nmsu.edu (Rafael M. Rubio) Subject: Intel iPSC Hypercube performance? Date: 17 Oct 1993 06:28:36 GMT Organization: New Mexico State University, Las Cruces, NM Nntp-Posting-Host: dante.nmsu.edu We here at NMSU have recently come across two Intel 286-based iPSC Hypercubes (config. d5) totaling 64 processors and would like to know how the math performance of a setup like this compares to that of a Sun 4/670. Our ideal application would be ray tracing images. Thanks for any and all info. Rafael Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: David Kanecki Subject: Simulation for Emergency Management CFP As chairperson of the Emergency Management Committee for The Society for Computer Simulation, I would like to ask for your help in informing others about the Emergency Management Conference and to invite you to submit a paper or abstract to the conference. Papers and abstracts can be submitted until late December. To submit a paper or abstract, please send it to: Simulation for Emergency Management c/o SMC '94 P.O. Box 17900 San Diego, CA 92177 Phone: (619)-277-3888, Fax: (619)-277-3930 Currently, we have received four papers from colleagues in the industrial, government, and academic areas. Also, if you would like to volunteer, please contact me, kanecki@cs.uwp.edu, or SCS in San Diego. Other conferences that are being held during the 1994 SCS Simulation Multiconference, SMC '94, April 11-15, 1994, Hyatt Regency Aventine - La Jolla, San Diego, California are Simulators International; High Performance Computing Symposium; Visualization, Validation, & Verification of Computer Simulations; Mission Earth; Military, Government, and Aerospace Simulation; and 27th Annual Simulation Symposium. To show the diversity of topics and interest of Emergency Management, I have compiled a list of authors and titles of papers from the 1993 and 1992 conferences: 1993 Conference 1. "A Report Card on the Effectiveness of Emergency Management and Engineering", James D. Sullivan, CCP, CDP, CSP. 2. "A New Cooperative Program for the Development of Advanced Technology for Emergency Preparedness", Robert J. Crowley, P.E. 3. "Simulation in Training and Exercises for Emergency Response", Lois Clark McCoy. 4. "Fatal Hazardous Materials and Accident Statistics", Theodore S. Glickman, Dominic Golding, Karen S. Terry, Frederick W. Talcott. 5. "A Risk Analytic Approach to Contingency Planning using Expert Judgement in Simulated Scenarios", John R. Harrald, Thomas Mazzuchi. 6. "Emergency Response and Operational Risk Management", Giampiero E.G. Beroggi, William A. Wallace. 7. "Damage Analysis of Water Distribution Systems using GIS", Matthew J. Cassaro, Sridhar Kamojjala, N.R. Bhaskar, R.K. Ragade, M.A. Cassaro. 8.
"Physical Damage and Human Loss: Simulation of the Economic Impact of Earthquake Mitigation Measures", Frederick Krimgold, Jayant Khadilkar, Robert Kilcup. 9. "Utah Equip: A Comprehensive Earthquake Loss Prediction Model for the Wasatch Fault", Robert Wilson, Christopher Rojahn, Dr. Roger Scholl, Barbara Skiffington, Terry Cocozza. 10. "Geographic Information System (GIS) Application in Emergency Management", Donald E. Newsom, Ph.D., P.E., Jacques E. Mitrani. 11. "Smart Spatial Information Systems and Disaster Management: GIS in the Space Age", A.M.G. Jarman, Ph.D. 12. "An Evacuation Simulation for Underground Mining", Richard L. Unger, Audrey F. Glowacki, Robert R. Stein. 13. "Importance of Rems in the Aftermath of Hurricane Andrew", Suleyman Tufekci, Sandesh J. Jagdev, Abdulatef Albirsairi. 14. "Optimal Routing in State Dependent Evacuation Networks", David L. Bakuli, J. MacGregor Smith. 15. "Evacuation Models and Objectives", Gunnar G. Lovas, Jo Wik- lund, K. Harrald Drager. 16. "Xpent, Slope, Stability Expert System for Managing the Risk", R.M. Faure, Ph.D., D. Mascarelli, Ph.D. 17. "Mapping of Forest Units which have a Protective Function against Natural Hazards. An Application of Geographical Information Systems in France", Frederic Berger. 18. "Artificial Intelligence and Local Avalanche Forecasting: The System 'AVALOG' ", Robert Belognesi. 19. "Simulations in Debris Flow", Fabrice Moutte. 20. "Planning and Controlling of General Repair in a Nuclear Power Plant", Majdandzic N. and Dobrila Damjonovic-Zivic. 21. "Spatial Decision Support Systems for Emergency Planning: An Operational Research/ Geographical Information Systems Approach to Evacuation Planning", F. Nishakumari de Silva, Michael Pidd, Roger Eglese. 22. "Online Expert Systems for Monitoring Nuclear Power Plant Accidents", M. Parker, F. Niziolek, J. Brittin. 23. "Nuclear Power Reactor Accident Monitoring", M. Parker, P.E. 24. "An Expert System for Monitoring the Zion Nuclear Power Station the DNS Early Warning Program", Joseph L. Brittin, Frank Niziolek. 25. "Fire Spread Computer Simulation of Urban Conflagagrations", P. Bryant, G.R. Doenges, W.B. Samuels, S.B. Martin, A.B. Willoughby. 26. "Practical Applications of Virtual Reality to Firefighter Training", Randall Egsegian, Ken Pittman, Ken Farmer, Rick Zobel. 27. "Difficulties in the Simulation of Wildfires", James H. Brad- ley, A. Ben Clymer. 28. " 'Snow and Computer' A Survey of Applications for Snow Hazards Protection in France", Laurent Buisson, Gilles Borrel. 29. "Mem-brain, Decision Support Integration-Platform for Major Emergency Management (MEM)", Yaron Shavit. 30. "Architecture of a Decision Support System for Forest Fire Prevention and Fighting", Jean-Luc Wybo, Erick Meunier. 31. "Optimizing Comprehensive Emergency Mitigation and Response through the use of Automation (Panel Discussion)", Lois Clark McCoy, David McMillion. 32. "Applying a Geographical Information System to Disaster Epide- miologic Research: Hurricane Andrew, Florida 1992", Josephine Malilay, Lynn Quenemoen. 33. "An Effective Method of Extracting a Localized Storm History from a Database of Tracks", Eric C. Dutton, Ronald S. Reagan. 34. "Periodic Poisson Process for Hurricane Disaster Contingency Planning", Ronald S. Reagan. 35. "Estimation, Optimization, and Control in Rural Emergency Medical Service (EMS) Systems", Cecil D. Burge, Ph.D., P.E. 36. "Visions for a Networked System of Emergency Vehicle Training Simulators", Gregory J. Bookout. 37. "The FEMA ROCS Model", Henry S. Liers, Ph.D. 38. 
"Computers bring Crisis Simulations to Life using Computer Maps, Graphics, and Databases to Re-Define and Maximize the Effective- ness of "Tabletop" Crisis Simulations", James W. Morentz, Ph.D., Lois Clark McCoy, Joseph Appelbaum, David Griffith. 39. "The Use of Computer Simulations for Consequence Analysis of Toxic Chemical Releases", E.D. Chikhliwala, M. Oliver, S. Kothandarman. 40. "Didactic Simulation (Syndicate Exercise) for Disaster Manage- ment", Dr. S. Ramini. 41. "Its Just one Damn' Crisis After Another...", S.F. Blinkhorn, M.A., Ph.D. 42. "AEDR, American Engineers for Disaster Relief Database, Spread- sheet and Wordprocessing Applications", James S. Cohen. 43. "Geophysical Uncertainties Affecting Emergencies", James H. Bradley, Ph.D. 44. "Consultation on Simulation for Emergency Preparedness (COSEP) User/Developer Roundtable Discussion Session", Lois Clark McCoy, Donald E. Newsom, Jacques Mitrani. 1992 Proceedings 1. "Are We Winning the War Against Emergencies", James D. Sullivan, CCP, CDP, CSP. 2. "Simulation of Suburban Area Fires", A. Ben Clymer. 3. "Expertgraph: Knowledge Based Analysis and Real Time Monitoring of Spatial Data Application to Forest Fire Prevention in French Riviera", Jean Luc Wybo. 4. "Simulation in Support of The Chemical Stockpile Emergency Preparedness Program (CSEPP)", Robert T. Jaske, P.E., Madhu Beriwal. 5. "Modeling Protective Action Decisions for Chemical Weapons Accidents", John H. Sorensen, George O. Rogers, Michael J. Meador. 6. "Simulation Meets Reality - Chemical Hazard Models in Real World Use", Donald E. Newsom, Ph.D., P.E. 7. "Managing the Risk of a Large Marine Chemical Spill", Marc B. Wilson, John R. Harrald. 8. "Simclone - A Simulated Cyclone - Some User Experiences and Problems", Dr. S. Ramani. 9. "Simulation of Coastal Flooding Caused by Hurricanes and Winter Storms", Y.J. Tsai. 10. "Simulation of Environmental Hazards in a Geographic Informa- tion System: A Transboundary Urban Example from the Texas/Mexico Border- lands", Thomas M. Woodfin. 11. "Simulation and Protection in Avalanche Control", Laurent Buisson. 12. "Natural Disasters, Space Technology, and the Development of Expert Systems: Some Recent Developments in Australian National River Basin Planning and Management", A.M.G. Jarman, Ph.D. 13. "Characterizing Mine Emergency Response Skills: A Team Approach to Knowledge Acquisition", Launa Mallet, Charles Vaught. 14. "Using Simulation to Prepare for Emergency Mine Fire Evacua- tion", Audrey F. Glowacki. 15. "Dymod: Towards Real Time, Dynamic Traffic Routing During Mass Evacuations", Frank Southworth, Bruce N. Janson, Mohan M. Venigalla. 16. "A Tutorial on Modeling Emergency Evacuation", Thomas Kisko, Suleyman Tufekci. 17. "REMS: A Regional Evacuation Decision Support System", Thomas Kisko, Suleyman Tufekci. 18. "EVACSIM: A Comprehensive Evacuation Simulation Tool", K. Harrald Drager, Gunnar Lovas, Jo Wiklind, Helge Soma, Duc Duong, Anne Violas, Veronique Laneres. 19. "The Prediction of Time-Dependent Population Distributions", George Banz. 20. "Mulit-Objective Routing in Stochastic Evacuation Networks", J. MacGregor Smith. 21. "Earthquake Impact Projections Expert System Application", Barbara Skiffington, Robert Wilson. 22. "An Economic Profile of a Regional Economy Based on an Implan Derived Database", E. Lawrence Salkin. 23. "Energy Corrected Simulation Accelerograms for Non-Linear Structures", Darush Davani, Michael P. Gaus. 24. "Stimulating the Planning Progress Through Computer Simulation", Salvatore Belardo, John R. Harald. 25. 
"Simulation Methods in Utility Level Nuclear Power Plant Emer- gency Exercises", Klaus Sjoblom. 26. "Computer Simulation of Industrial Base Capacity to Meet Na- tional Security Requirements", Mile Austin. 27. "The FEMA Emergency Management Assessment System", Robert Wilson. 28. "The Missing Data Base: Under-Automation in Disaster Response & Planning", Lois Clark McCoy. 29. "The Environmental Education: Need for All", M. Abdul Majeed. === Call For Papers === SIMULATION FOR EMERGENCY MANAGEMENT Sponsored by The Society for Computer Simulation, SCS April 11-15, 1994 La Jolla - California Part of the SCS 1994 Simulation Multiconference A special topic area of the SMC '94, sponsored by the Emergency Manage- ment Engineering Technical Activity Committee of the SCS brings users, planners, researchers, managers, technicians, response personnel, and other interested parties to learn, teach, present, share, and exchange ideas and information about how, when, where, and why computer simula- tion and related tools can be used to avoid, mitigate, and recover from disasters and other emergencies. Topics Natural Disasters Hurricanes and Tornadoes, Floods, Earthquakes, Volcanic Activity, Outdoor fires, snow and debris avalanches. Man-made Disasters Oil, Chemical and Nuclear spills, Nuclear and Chemical plant acci- dents, building fires, Communication systems failures, Utility failures. Techniques Training and Simulators, AI and Expert systems, Global information systems, Risk Analysis, Operations Research, Simulation, Effectiveness analysis, Cost and Damage analysis. Specific Applications Evacuation, Research on Emergency Management or Engineering, Emer- gency Control Search and Rescue. Presentations, demonstrations and exhibits concerning any and all areas of simulation and modeling (as well as related technologies) including safety, emergency management and planning, forensic technology, design, response, user experience and problems and case studies are appropriate to be presented. Papers or abstracts can be submitted until late December to: Simulation for Emergency Management c/o SMC '94 P.O. Box 17900 San Diego, CA 92177 Phone (619)-277-3888, Fax (619)-277-3930 Other Conferences and activities being held as part of SMC '94 Simulators International, High Performance Computing Symposium, Visualization, Validation & Verification of Computer Simulation, Mission Earth, Military, Government, and Aerospace Simulation, 27th Annual Simulation Symposium, Professional Development Seminars, and Exhibits. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ferreira@cri.ens-lyon.fr (Afonso Ferreira) Subject: CFP: Int. Conf. on Parallel Algorithms Organization: Ecole Normale Superieure de Lyon References: <93-10-065@comp.compilers> ***************************************************************** * Final Announcement and Call for Papers * * * * CANADA-FRANCE CONFERENCE ON PARALLEL COMPUTING * * * * Montreal - May 18-20, 1994 * ***************************************************************** This conference will provide an international forum for researchers investigating algorithmic and structural aspects of distributed memory parallel computing. The conference will be held at Concordia University in downtown Montreal the week before the 1994 ACM Symposium on Theory of Computing. Authors are invited to submit papers describing original, unpublished research results in the following areas of parallel computing. 
-- Communication in interconnection networks -- Discrete algorithms -- Embeddings and mappings -- Geometric algorithms -- Algorithms for unstructured problems -- Data structures Invited Speakers ---------------- Ravi Baliga (CERCA, Montreal) Jean-Claude Bermond (CNRS, I3S, Nice) Frank Dehne (Carleton U., Ottawa) Nicholas Pippenger (UBC, Vancouver) Dominique Sotteau (CNRS, LRI, Paris) Program Committee ----------------- Selim Akl (Queen's U., Kingston), Michel Cosnard (LIP - ENS Lyon), Pierre Fraigniaud (LIP - ENS Lyon), Marie-Claude Heydemann (U. de Paris-Sud), Arthur Liestman (Simon Fraser U., Vancouver), Prakash Panangaden (McGill U., Montreal), Ajit Singh (U. of Waterloo), Ivan Stojmenovic (U. of Ottawa), Denis Trystram (LMC, Grenoble), Alan Wagner (UBC, Vancouver). Organizing Committee -------------------- Afonso Ferreira (CNRS, LIP, ENS Lyon), Gena Hahn (U. de Montreal), Jaroslav Opatrny (Concordia U., Montreal), Joseph Peters (Simon Fraser U., Vancouver), Vincent Van Dongen (CRIM, Montreal). Instructions for Authors ------------------------ All submisssions will be refereed and all accepted papers will be published in the conference proceedings. An author of each accepted paper is required to attend the conference and to present the paper. Papers should not exceed 15 pages and should include an abstract and several descriptive keywords. Authors should send 5 copies of their manuscripts to the Program Committee Chair: Michel Cosnard Laboratoire de l'Informatique du Parallelisme Ecole Normale Superieure de Lyon 46, Allee d'Italie 69364 Lyon Cedex 07 France Dates ----- October 31, 1993: Deadline for submission of papers January 31, 1994: Notification of acceptance February 28, 1994: Deadline for camera-ready copy Partners -------- Centre Jacques Cartier Concordia University NSERC (tentative) PRC C3 of the French CNRS Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jan@cs.ruu.nl (Jan van Leeuwen) Subject: ESA'94 European Symposium on Algorithms Organization: Utrecht University, Dept. of Computer Science CALL FOR PAPERS 2nd Annual EUROPEAN SYMPOSIUM ON ALGORITHMS (ESA'94) SEPTEMBER 26-28, 1994, UTRECHT - THE NETHERLANDS In the new series of annual international symposia on algorithms, the Second Annual European Symposium on Algorithms (ESA'94) will be held near Utrecht, The Netherlands, September 26-28, 1994. The European Symposium on Algorithms covers ALL RESEARCH ON ALGORITHMS such as it is carried out in the fields of (THEORETICAL) COMPUTER SCIENCE, DISCRETE APPLIED MATHEMATICS and all OTHER AREAS of algorithm-oriented research AND ITS APPLICATION. The Symposium aims at intensifying the exchange of information on new research directions and at the presentation of recent research results and their utilization in the field. SCOPE: the Symposium covers ALL RESEARCH IN THE FIELD OF SEQUENTIAL, PARALLEL AND DISTRIBUTED ALGORITHMS and its application. Papers are solicited describing original results in ALL FIELDS OF ALGORITHMS RESEARCH, both in general and in specific areas such as: graph- and network problems, computational geometry, algebraic problems and symbolic computation, pattern matching, combinatorial optimization, neural and genetic computing, cryptography, and so on. ESA also solicits papers describing original results in ALGORITHMS RESEARCH APPLIED TO CONCRETE PROBLEMS IN SCIENCE AND INDUSTRY or dealing with new (algorithmic) issues arising in the implementation of algorithms in real-world problems. 
AUTHORS are invited to submit 12 copies of an extended abstract or full draft paper of at most 12 pages before MARCH 25, 1994 to the chairman of the program committee: Professor Jan van Leeuwen Dept of Computer Science Utrecht University Padualaan 14 3584 CH UTRECHT THE NETHERLANDS (email : jan@cs.ruu.nl) Submissions should also include the fax number and email address of the sender. Notification of acceptance follows by May 20, 1994. Accepted papers will be published in the proceedings of the Symposium, which will appear in the series Lecture Notes in Computer Science of Springer-Verlag. Camera-ready copy is due before June 20, 1994. All participants receive a copy of the proceedings as part of their registration. PROGRAM COMMITTEE Helmut Alt (Berlin) Thomas Lengauer (Bonn) Giuseppe Di Battista (Rome) Jan Karel Lenstra (Eindhoven) Philippe Flajolet (Paris) Andrzej Lingas (Lund) Alon Itai (Haifa) Bill McColl (Oxford) Lefteris Kirousis (Patras) Friedhelm Meyer auf der Heide (Paderborn) Jan van Leeuwen (Utrecht, chair) Wojciech Rytter (Warsaw) ORGANIZING COMMITTEE Hans Bodlaender, Marko de Groot, Jan van Leeuwen, Margje Punt, and Marinus Veldhorst (Utrecht University). LOCATION: the Symposium will be held in the conference facilities of the `National Sports Centre Papendal' of the Dutch National Sports Federation. The conference centre is located in a beautifully forested area near the city of Arnhem, some 30 miles east of Utrecht. It features excellent facilities for recreational sports (golf, swimming, tennis, and so on) and very pleasant surroundings. For all further information on ESA'94 please contact : Marko de Groot Dept of Computer Science Utrecht University phone: +31-30-534095 / +31-30-531454 fax: +31-30-513791 email: marko@cs.ruu.nl CALL FOR PAPERS CALL FOR PAPERS CALL FOR PAPERS CALL FOR PAPERS =============================================================================== Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.ai.genetic,comp.robotics,comp.parallel From: Pierre.Bessiere@imag.fr (pierre bessiere ) Subject: TR: Genetic Algorithms for robotics (in French) Organization: Institut Imag, Grenoble, France The following Technical Report is available on FTP at ftp.imag.fr ***CAUTION: The report is written in FRENCH*** ******************************************************************** TITLE :Algorithmes genetiques paralleles pour la planification de trajectoires de robots en environnement dynamique. AUTHOR(S) :Thierry Chatroux REFERENCE :CNAM engineering thesis (Memoire d'ingenieur, Conservatoire National des Arts et Metiers); December 1993. LANGUAGE :French LENGTH :98 pages DATE :15/10/93 KEYWORDS :Path planning, massively parallel computers, genetic algorithms, geometric modeling, dynamic environment. FILE NAME :chatroux.cnam93.f.ps.Z Author E-mail :chatroux@imag.fr bessiere@imag.fr, muntean@imag.fr, mazer@lifia.imag.fr Related Files :bessiere.iros93.e.ps.Z ABSTRACT : Our work concerns the parallel implementation of trajectory-computation algorithms for robotics. The objective is to reach the performance required to control the movements of a robot in an environment cluttered with dynamic obstacles. In this thesis we describe the massively parallel implementation and experimental evaluation of a path planner for a manipulator arm with 6 degrees of freedom.
To build this planner, we used an optimization technique based on parallel genetic algorithms, coupled with an original path-planning technique, the Ariadne's clew algorithm, made up of two algorithms, Search and Explore. Our method was implemented in PARX, the kernel of the PAROS operating system for massively parallel machines, and runs on a Supernode machine with 128 Transputers. PARX and the Supernode architecture were designed and developed by the SYstemes Massivement PAralleles (massively parallel systems) team at the Laboratoire de Genie Informatique in Grenoble within the ESPRIT SUPERNODE I and II projects. ******************************************************************** HOW TO RETRIEVE A FILE FROM ftp.imag.fr? ______________________________________________ To retrieve a file xxx.yyy.e.ps.Z: Anonymous ftp on: - ftp.imag.fr or - 129.88.32.1 yourmachine>ftp ftp.imag.fr Name: anonymous Password: yourname@youraddress ftp>cd pub/LIFIA ftp>binary %THIS LINE IS MANDATORY! since the files are compressed% ftp>get xxx.yyy.e.ps.Z ftp>quit yourmachine>uncompress xxx.yyy.e.ps.Z yourmachine>lpr -ls xxx.yyy.e.ps -- Pierre BESSIERE *************** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: eugene@nas.nasa.gov (Eugene N. Miya) Subject: FAQ for biblios Who What Where When Why How --- ---- ----- ---- --- --- Who --- Eugene Miya has committed to maintaining a comprehensive biblio on parallelism for ten years (Ref.). What ---- The foundations of the biblio began with several published bibliographies on parallelism dating back to 1980. This bibliography corrects errors found in those. The bibliography is free, but because it was based on previously published material, the terms of a copyright with Prentice-Hall ask that we record all sites using it. A hardcopy letterhead of request is all it takes to get access. The format of the bibliography is straight ASCII and Unix refer for use with document processing systems. Converters for bibtex and Scribe (and script) exist. Some of the text contains significant annotation. Sources: various published sources, unsolicited letters or email, net sources (comp.parallel, comp.os.research, comp.research.japan, comp.doc.techreports, and others). Can't cover everything. It is important that you the reader also contribute. The mass of information, a lack of continued funding, and other problems mean that help is needed to keep the community informed. It is used by many people: schools, companies, government agencies, institutions. Where ----- We need a letterhead to be able to tell you where. Sorry. Blame Prentice-Hall. Miya was only trying to stand on the shoulders of giants (if not the toes of his colleagues). Why --- Paper has real problems. If you go back to the 1980s biblios, the referees couldn't do a good job. Software is far more flexible. Copyright is another problem. What makes the biblio special are annotations, comments, keywords by the readership and others (many anonymous: "This stinks," others initialed (XYZ), or "signed"). Many institutions already have it. You only need to locate the point of contact. The current size is 6 MBs. It is also helpful if you, as a student of parallel processing, do NOT simply post rehashes of his references (he can tell, believe me). It just increases the catch-up work.
Instead, separate Miya references from non-Miya references, so that he can incorporate them (and reformat if necessary) along with other references. Think of this as a recording secretary, but with the combined knowledge of the parallelism experts on the net. Upside: It tries to be comprehensive. It provides better coverage and collected commentary (in some cases) than bibliographic services. It tries to cut an honest deck. Downside: It is large. It is not always as up to date as the maintainer would like, but he and the community try. You can help, too. Reference (this also shows what refer looks like [see Mike Lesk's paper on inverted indices in the Unix manual]): %A E. N. Miya %T Multiprocessor/Distributed Processing Bibliography %J Computer Architecture News %I ACM SIGARCH %V 13 %N 1 %D March 1985 %P 27-29 %K Annotated bibliography, computer system architecture, multicomputers, multiprocessor software, networks, operating systems, parallel processing, parallel algorithms, programming languages, supercomputers, vector processing, cellular automata, fault-tolerant computers, some digital optical computing, some neural networks, simulated annealing, concurrent, communications, interconnection, %X Notice of this work. Itself. Quality: no comment. Also short note published in NASA Tech Briefs vol. 12, no. 2, Feb. 1988, pp. 62. Also referenced in Hennessy & Patterson pages 589-590. About an earlier unmaintained version. TM-86000 and ARC-11568. Maintaining for ten years with constant updates (trying to be complete but not succeeding). Limited verification against bibliographic systems (this is better than DIALOG). Storing comments from colleagues (DIALOG can't do this.) Rehash sections on a Sequent as a test of parallel search (this work exhibits unitary speed-up). 8^). The attempt is to collect respected comments as well as references. Yearly net postings hopefully result in updated "grequired" and "grecommended" search fields. Attempted to be comprehensive up to 1989. $Revision:$ $Date:$ The standard email request letter follows: The parallel/distributed processing bibliography (in machine readable form) is documented in ACM CAN: %A E. N. Miya %T Multiprocessor/Distributed Processing Bibliography %J Computer Architecture News %I ACM SIGARCH %V 13 %N 1 %D March 1985 %P 27-29 It began with a bibliography published in 1980 by %A M. Satyanarayanan %T Multiprocessing: an annotated bibliography %J Computer %V 13 %N 5 %D May 1980 %P 101-116 %X Excellent reference source, but dated. Text reproduced with the permission of Prentice-Hall \(co 1980. $Revision: 1.2 $ $Date: 84/07/05 16:58:56 $ My work is considerably larger (about 100 times). In order to obtain a copy on the Internet, I am required to ask for a letterhead from an institution stating that they understand portions are copyrighted. It's free, so that is not much to ask. Please also send any corrections, typos, and additions to me. Annotations and keywords are particularly encouraged, since I can't read everything. Citation in any of your published work is appreciated since this supports my work. Send letterhead to: E. Miya MS 258-5 NASA Ames Research Center Moffett Field, CA 94035 Please include your return Email address. I maintain copies on some sites; your site may have one already. Check with your site admin. The usual place is kept controlled to abide by terms of the copyright. I try to keep one point of contact to keep things simple. A list follows.
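(Before the contact list, one aside on the format itself: the refer records above are plain lines tagged with %-letter field codes -- %A for author, %T for title, %J for journal, and so on -- which is what makes the straight-ASCII bibliography easy to post-process. As a hedged illustration only, and emphatically not the bibtex/Scribe converters mentioned above, a minimal C reader for such records might look like the sketch below; the printed labels follow the usual refer field conventions.)
---------------------------------------------------------------------
/* Sketch only: read refer-tagged lines from stdin and label the fields. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[1024];
    const char *value;
    size_t len;

    while (fgets(line, sizeof line, stdin) != NULL) {
        len = strlen(line);
        if (len > 0 && line[len - 1] == '\n')
            line[len - 1] = '\0';            /* strip the trailing newline */

        if (line[0] != '%' || line[1] == '\0')
            continue;                        /* not a tagged refer field   */

        /* the field value starts after "%X " (or "%X" with no blank) */
        value = (line[2] == ' ') ? line + 3 : line + 2;

        switch (line[1]) {
        case 'A': printf("author:    %s\n", value); break;
        case 'T': printf("title:     %s\n", value); break;
        case 'J': printf("journal:   %s\n", value); break;
        case 'I': printf("publisher: %s\n", value); break;
        case 'V': printf("volume:    %s\n", value); break;
        case 'N': printf("number:    %s\n", value); break;
        case 'D': printf("date:      %s\n", value); break;
        case 'P': printf("pages:     %s\n", value); break;
        default:  printf("%%%c:        %s\n", line[1], value); break;
        }
    }
    return 0;
}
---------------------------------------------------------------------
(Fed one of the records above on standard input, it simply labels each recognized field; a real converter would go on to map those fields onto BibTeX or Scribe entries.)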
If you are not on the Internet, you can obtain an older version (with source files) from COSMIC Univ. of Georgia 382 East Broad St. Athens, GA 30602 It's ASCII/refer (that's the format above) bibliographic, tar/Unix tape format. There is a tape handling charge. Special requests: IBM format tapes, VMS BACKUP format are also possible, ask me, not COSMIC for these. Tape distribution is now restricted to North America, but I am trying to get world-wide distribution again. Details: Why refer? ( 1) human readable ASCII, not a binary format, 2) easily convertable to other formats (EBCDIC), 3) at the time bibtex didn't exist, 4) less overhead than bibtex (fields smaller, however I decided in favor of full names for journals rather than abbrev. because many users don't know what ICPP or IDCS stand for...., 5) not only can you search it, but you can use it with a filter or formatter like troff with reasonable reformatting results, stylistic considerations like whether author names should have initials or full names can be automated. Contact points (not every Dept. is a CS dept). NASA: me ICASE: me (Nancy Shoemaker, Bob Voigt) LLNL/MPCI/CRG/NERSC: me UCSC: me (Darrell Long) UC Berkeley: me (Eric Allman, formerly Mike Kupfer) AMT: Rex Thanakij Aerospace Corp: Anne Finestone Amdahl: Hideo Wada (IBM format) AT&T: Steve Crandall ANL: Robert Harrison Aus.NU: David Hawking Battelle: Rick Kendall Baylor CM: Stanley Hanks BBN: Miles Fidelman Boston U: A. Heddaya Brown: John Savage Clemson: Steve Stevenson Columbia: Yoram Eisenstadter or Ella Sanders Convex: Greg Astfalk Cornell: Doug Elias CoState: Dale Grit (or RRO) Cray: Tim Hoel DDt: Anders Ardo Denelcor (maybe Tera now): B. Smith DEC: Walter Lamia/John Sopka/C. Kiefer [no longer at DEC] Dorian Research: R. Levine DSTO (oz): Charles Watson Duke U.: Mark Jones Emory: V. Sunderam Encore: Peter Fay ETH: R. Ruhl EPFL: Lars Bombolt FPS (defunct): Tom Bauer Fr.-Alex. Univ. Erlangen-Nurnberg: J. Kleinoder Fujitsu America, Inc.: Ken Muira GaTech: Karsten Schwan (Gene Spafford) GEC, NY: David O'Hallaron GMD MBH: Ernst-Joachim Busse GMD First: Diantong Liu HaL: Dennis Allison Harvard U: Stravos Macrakis Horizon RI: Craig Hughes Hope College: M. Jipping IBM: F. Darema Indiana U: Dennis Gannon Inst. di Disica Cosmica, SIAM: G. Boella INRIA: Jean-Jacques Levy ISS: Jit Biswas Intermetrics: William White JVNC: Bruce Bathurst Loral: Ian Kaplan (Defunct) Katholieke Univ. Leuven: Prof. D. Roose KSR: M. Presser Martin Marietta Energy (OR,TN): Richard Hicks MCC: ??? Mitre: Thomas Gerasch MIT: Rich Lethin Maspar: Peter Christy [now at SUN, Kaplan might be able to help] Mich. State U: Richard Enbody Minn.SC: Dennis Lienke Miss. State U: Donna Reese Monash U: W. Brown/Sim Or/Peter Sember Motorola: Fred Segovich Myrias: Jean Andruski (defunct) NAG: P. Mayes NM Tech: G. Francia, III NOSC: H. Smith Northrup: Jeff Crameron NYU: Allen Gottlieb OhioState: Jeff Martens OrGI: Robbie Babb (Dave DiNucci, defunct) OrSU: Youfend Wu Purdue: Andrew Royappa Rice: Ken Kennedy Rutgers: A. Gerasoulis Rutherford Appleton: David Greenaway Santa Clara U.: Hasan AlKhatib Schlumberger: Peter Highnam SGI: Jim Denhert SMU: I. Gladwell SWRI: Richard Murphy SRI: Cliff Isberg Stanford: Byron Davies/M. Flynn/V. Pratt/J. Hennessey/Andy Tucker Stony Brook: L. Wittie NPAC/Sycrause: Bill O'Farrell SSI: ??? SUN: Lisa Steiner/Bob Birss TAI: Lisa Vander Sluis TI: Bryon Davies TMC: Robin Perera Ultra: Bill Overstreet U. Adel.: Bruce Tonkin U. Ala.: Steve Wixson U. at Albany: Steven Sutphen Univ. 
AZ: Peter Wolcott/Matthew Saltzman UBC: Donald Acton U. Buff.: John Case UCLA: ??? UCSD: Greg Hidley UCF: Narsingh Deo/Shivakumar Sastry UCDublin: John Dunnion UCo, Boulder: Lloyd Fosdick (Mike Schwartz) U Del.: Gary Delp/Dave Farber U. Edinburgh: Tom Stimerling/Richard Eyre-Todd/R. Ibbett U. Exeter: Patrick Lidstone Univ. Fed. de Minas Gerais: Marcio Luiz Bunte de Carvalho UFl: G. Fischer U. Ha.: Tim Brown UHo: Francis Kam UId.: Howard Demuth UIll.: Steve Turner/Dan Reed U. Kaiser.: Gerhard Zimmermann/ F.R. Abmann U. Lan.: Vince Aragon/ C. D. Paice U. Mel.: K. Forward Univ. NM: krishna@rye.cs.unm.edu/Art St. George U Minn.: Gary Elsesser/Steven Miller UNL: Ashok Samal U. Notre Dame: Karl Heubaum UNC: Bruce Smith UNTx: Roy Jacob U. Pitt.: Mary Lou Soffa U. Qu.: V. L. Narasimhan U. Roch.: Cesar Quiroz U. So. Cal: Les Gasser U. So. Car: John Bowles/David Walker U Strathclyde: Magnus Luo Univ. Stuttgart: Joachim Maier U. Tx Austin: Vipin Kumar U. Tx Dallas: Eliezer Dekel/Leslie Crawford U. Trond.: Petter Moe U U: Armin Liebchen UWa: Jean Loup Baer U Waterloo: Peter Bain U. Wis.: Gregory Moses Utrecht U.: Lex Wolters Wa. St. U: Alan Genz W. Mi. U: John Kapenga Yale U.: Miriam Putterman BRL?: Curt Levey (host former brunix) Wa. U (SL. Mo): Fred Rosenberger Zentralinst. fur Ang. Math.: J. Fr. Hake paulo rosado Others Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: walker@rios2.epm.ornl.gov (David Walker) Subject: SHPCC94 Call for Papers reminder THE 1994 SCALABLE HIGH PERFORMANCE COMPUTING CONFERENCE SHPCC94 DEADLINE FOR EXTENDED ABSTRACTS OF PAPERS: November 1, 1993 KNOXVILLE, TENNESSEE, U.S.A., MAY 23 - 25, 1994 GENERAL CHAIR: Jack Dongarra University of Tennessee and Oak Ridge National Laboratory dongarra@cs.utk.edu 615 974-8296 (fax) PROGRAM CHAIR: David W. Walker Oak Ridge National Laboratory walker@msr.epm.ornl.gov 615 574-0680 (fax) PROGRAM COMMITTEE: David Bailey, NASA Ames Research Center William Gropp, Argonne National Laboratory Rolf Hempel, Gesellschaft fur Mathematik und Datenverarbeitung, Germany Anthony Hey, University of Southampton Charles Koelbel, Rice University Steve Otto, Oregon Graduate Institute Cherri Pancake, Oregon State University Paul Pierce, Intel Supercomputer Systems Division Sanjay Ranka, Syracuse University Gary Sabot, Thinking Machines Corporation Robert Schreiber, NASA RIACS Bernard Tourancheau, LIP, CNRS, Ecole Normale Superieure de Lyon, France Robert van de Geijn, University of Texas, Austin Katherine Yelick, University of California, Berkeley SPONSORED BY: IEEE Computer Society The 1994 Scalable High Performance Computing Conference (SHPCC94) is a continuation of the highly successful Hypercube Concurrent Computers and Applications (HCCA), and Distributed Memory Concurrent Computing (DMCC) conference series. SHPCC takes place biennially, alternating with the SIAM Conference on Parallel Processing for Scientific Computing. INVITED SPEAKERS: Guy Blelloch, Carnegie Mellon University Phil Colella, University of California, Berkeley David Culler, University of California, Berkeley Monica Lam, Stanford University Marc Snir, IBM T.J. Watson Research Center SHPCC94 will provide a forum in which researchers in the field of high performance computing from government, academia, and industry can presents results and exchange ideas and information. SHPCC94 will cover a broad range of topics relevant to the field of high performance computing. 
These topics will include, but are not limited to, the following: Architectures Load Balancing Artificial Intelligence Linear Algebra Compilers Neural Networks Concurrent Languages Non-numerical Algorithms Fault Tolerance Operating Systems Image Processing Programming Environments Large-scale Applications Scalable Libraries C++ The SHPCC94 program will include invited talks, contributed talks, posters, and tutorials. SHPCC94 will take place at the Holiday Inn Convention Center in Knoxville, Tennessee. Registration details will be made available later. Instructions for Submitting Papers ---------------------------------- Authors are invited to submit contributed papers describing original work that makes a significant contribution to the design and/or use of high performance computers. All contributed papers will be refereed by at least three qualified persons. All papers presented at the conference will be published in the Conference Proceedings. 1. Submit 3 copies of an extended abstract of approximately 4 pages. Abstracts should include a succinct statement of the problems that are considered in the paper, the main results achieved, an explanation of the significance of the work, and a comparison with past research. To ensure a high academic standard, the abstracts of all contributed papers will be refereed. DEADLINE FOR EXTENDED ABSTRACTS OF PAPERS: November 1, 1993 Authors will be notified of acceptance by January 14, 1994 DEADLINE FOR FINAL CAMERA-READY COPY OF COMPLETE PAPER: February 14, 1994 The final complete paper should not exceed 10 pages. 2. Each copy of the extended abstract should have a separate title page indicating that the paper is being submitted to SHPCC94. The title page should also give the title of the paper and the names and addresses of the authors. The presenting author, and the author to whom notification of acceptance should be sent, should both be clearly indicated on the title page, together with their phone, fax, and email. 3. Extended abstracts should be sent to the Program Chair, David Walker, at the address above. Poster Presentations -------------------- Poster presentations are intended to provide a more informal forum in which to present work-in-progress, updates to previously published work, and contributions not suited for oral presentation. To submit a poster presentation send a short (less than one page) abstract to the Program Chair, David Walker, at the address above. Poster presentations will not appear in the Conference Proceedings. DEADLINE FOR SHORT ABSTRACTS OF POSTERS: November 1, 1993 Poster presenters will be notified of acceptance by January 14, 1994 Abstracts for poster presentations must include all the information referred to in (2) above. If this will not fit on the same page as the abstract, then a separate title page should be provided. Instructions for Proposing a Tutorial ------------------------------------- Half-day and full-day tutorials provide an opportunity for researchers and students to expand their knowledge in specific areas of high performance computing. To propose a tutorial, send a description of the tutorial and its objectives to the Program Chair, David Walker, at the address above. The tutorial proposal should include: 1. A half-page abstract giving an overview of the tutorial. 2. A detailed description of the tutorial, its objectives, and intended audience. 3. A list of the instructors, with a brief biography of each. All tutorials will take place on May 22, 1994.
DEADLINE FOR TUTORIAL PROPOSALS: November 1, 1993 Tutorial proposers will be notified of acceptance by January 14, 1994 For further information contact David Walker at walker@msr.epm.ornl.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wang@astro.ocis.temple.edu (Jonathan Wang ( the-wang )) Subject: Papers on Fault Tolerance? Organization: Temple University I am looking for papers on fault tolerance in tuple space implementation (as opposed to message passing implementation). Any help is appreciated. Thanks in advance. --wang Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: lancer@cs.montana.edu (Lance Kind) Newsgroups: comp.arch,comp.ai,comp.ai.neural-nets,comp.parallel Subject: Re: Publication Announcement for PARALLEL PROCESSING by Moldovan Organization: Computer Science, MSU, Bozeman MT, 59717 In my personal opinion, it's a pretty good book. We are currently using this book for our // processing class. ==>Lancer--- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: blight@eeserv.ee.umanitoba.ca (David C Blight) Subject: SUMMARY : Message Passing Simulators Organization: Electrical Engineering, U of Manitoba, Winnipeg, Manitoba, Canada Original Post : I am curious if anyone knows of any freely available message-passing simulators. I am not looking for anything specific, as I am just curious about what is available, if anything. I don't know if there is any interest in this sort of software. I have done what I suspect most people do: written my own simulator for the algorithms I am interested in. I am hoping that maybe there are some simulators available that will do common routing algorithms (worm-hole routing and that sort of stuff). Summary : I received responses about 4 different programs. I have made all these programs available for anonymous ftp on ftp.ee.umanitoba.ca in /pub/parallel Netsim program : (titan.cs.rice.edu : public/parallel/sim.tar.Z) Currently available: YACSIM - A process-oriented discrete-event simulator implemented as an extension of the C programming language. This simulator has been used extensively for several years. It is stable and relatively free of bugs. There is a reference manual included in the package. NETSIM - A general-purpose interconnection network simulator implemented as an extension of YACSIM. This simulator is relatively new and has only recently been made available outside Rice. It has not been used much and almost certainly contains some bugs, although we have fixed all we know about. There is a reference manual included in the package. DBSIM - A debugging utility for use with any of the simulators. This program is operational and documented. There are no known bugs, but it has not been extensively tested. Not yet available: MEMSIM - A cache and shared address space memory simulator implemented as an extension of YACSIM. This simulator is currently being extensively revised and is not yet available. Our goal is to have a version we can distribute by the end of the summer of 1993. PARCSIM - A parallel architecture simulator implemented as an extension of YACSIM and including both YACSIM and MEMSIM. Parts of PARCSIM are operational but it is not complete and not yet ready for distribution. We hope to have a version that includes NETSIM, but not MEMSIM, available by the middle of the summer of 1993. It would be suitable for simulating distributed memory systems.
Chaos router simulator (shrimp.cs.washington.edu in ~ftp/pub/chaos/simulator) The Chaos router simulator is available via anonymous ftp from shrimp.cs.washington.edu in ~ftp/pub/chaos/simulator. It is written in several thousand lines of ANSI C code, and is known to run on MIPS, SPARC, and ALPHA architectures. mpsim This is Felix Quevedo's Multiprocessor simulator package which he wrote here at the University of Miami. I've provided an nroff'ed version of his documentation, for those without the ms macros, or perhaps without *roff itself. The package was developed on a Sun 3 running SunOS 3.5 and has been tested on a Sun 4 running SunOS 4.0.3, a vax running Ultrix 3.0, and a mac running A/UX 1.1. The Makefile needs to be changed for non-SunOS systems. MIT LCS Advanced Network Architecture group's network simulator This directory contains the MIT LCS Advanced Network Architecture group's network simulator. The simulator is an event-driven simulator for packet networks with a simple X window interface to allow interactive use. Thanks to all who responded. Dave Blight blight@ee.umanitoba.ca Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.hp.hardware,comp.parallel,aus.general,melb.general,aus.jobs,misc.jobs.misc From: rich@latcs1.lat.oz.au (Rich Taylor) Subject: Disk Arrays & Disk Mirroring Organization: Comp Sci, La Trobe Uni, Australia A friend of mine is in the process of setting up an Ethernet LAN with HP (Hewlett Packard) Unix machines using 30 GB disk arrays and disk mirroring. He would like to hear from people who have set up similar configurations. Basically he would like to get some idea of the issues, PROBLEMS & experiences in dealing with such a configuration (disk arrays & disk mirroring). I'll be happy to post a summary if I get any replies at all. P.S. He might even offer you a job with the company. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.theory,comp.arch,aus.parallel From: hossam@mocha.newcastle.edu.au (Hossam Elgindy) Subject: REMINDER: Reconfigurable Architectures Workshop Organization: Uni of Newcastle, Australia ***************************************************************************** ***************************************************************************** **REMINDER -- REMINDER -- REMINDER -- REMINDER -- REMINDER -- REMINDER** **REMINDER -- REMINDER -- REMINDER -- REMINDER -- REMINDER -- REMINDER** ***************************************************************************** ***************************************************************************** 8th International Parallel Processing Symposium Reconfigurable Architectures Workshop (Sponsored by IEEE Technical Committee on Parallel Processing) SYMPOSIUM: The 8th annual International Parallel Processing Symposium (IPPS '94) will be held April 26-29, 1994 at Hotel Regina, Cancun, Mexico. The symposium is sponsored by the IEEE Computer Society and will be held in cooperation with ACM SIGARCH. IPPS '94 is a forum for engineers and scientists from around the world to present the latest research findings in all aspects of parallel processing. WORKSHOP: The workshop on reconfigurable architectures (RAW'94) will be held on the first day of the Symposium (April 26). For the purpose of this workshop, a reconfigurable architecture "RA" consists of processors connected by a reconfigurable network.
The topology of the network outside the processors is fixed, and the internal connections between the I/O ports of each processor can be configured locally during execution of the algorithm. Numerous algorithms for this model have been presented in the scientific literature. They have addressed problems in computer vision, packet routing, embedding of different topologies and fault tolerance. The workshop will feature several sessions of submitted paper presentations, and proceedings will be available at the symposium and by public ftp. Authors are invited to submit manuscripts which demonstrate original research in all areas of Reconfigurable Architectures, implementations, algorithms and applications. The topics of interest include, but are not limited to: Reconfiguration Models Implementations and Systems Complexity Sorting and Packet Routing Scalability Embedding of Fixed Topologies Problem Solving Paradigms Image Processing Geographic Information Systems Graphics and Animation Algorithms (arithmetic/geometric/graph/numerical/randomised) SUBMITTING PAPERS: All papers will be reviewed. Send five (5) copies of the complete paper (not to exceed 15 single spaced, single sided pages) to: RAW'94 c/o Hossam ElGindy Department of Computer Science The University of Newcastle Callaghan, NSW 2308 Australia E-mail: raw94@cs.newcastle.edu.au IMPORTANT DATES Manuscripts (by postal services) 12 November 1993. Manuscripts (by e-mail in PostScript) 5 November 1993. Notification of review decisions 31 December 1993. Final version due 11 February 1994. WORKSHOP CHAIR: Hossam ElGindy, The University of Newcastle (AUSTRALIA) E-mail: raw94@cs.newcastle.edu.au, FAX: +61 49 21 6929 PROGRAM COMMITTEE: Hussein Alnuweiri University of British Columbia (Canada) Gen-Huey Chen National Taiwan University (Taiwan) Hossam ElGindy The University of Newcastle (Australia) Ju-wook Jang Samsung Electronics, Co.Ltd. (Korea) Philip MacKenzie University of Texas -- Austin (USA) Koji Nakano Hitachi, Ltd. (Japan) Stephan Olariu Old Dominion University (USA) Viktor K. Prasanna University of Southern California (USA) U. Ramachandran Georgia Institute of Technology (USA) Sartaj Sahni University of Florida (USA) Arun Somani University of Washington (USA) R. Vaidyanathan Louisiana State University (USA) CANCUN, MEXICO: The Yucatan peninsula with a shoreline of over 1600 kilometers is one of Mexico's most exotic areas. Over a thousand years ago the peninsula was the center of the great Mayan civilization. Cancun, with its powder-fine sand and turquoise water, is a scenic haven for sun lovers and archaeological buffs alike, and our Mexican hosts are eager to extend every hospitality for our visit to their part of the world. Air travel to Cancun is available from most major U.S. cities, and U.S. and Canadian citizens do not require passports to visit Mexico. The Westin Regina Resort, Cancun (Hotel Regina) is a self-contained meeting facility with spacious, air-conditioned rooms, on-site restaurants, and all the services of a world class hotel. Travel packages to various other nearby hotels (including reduced airfare and accommodation) are also available from most travel agents. Cancun is a dazzling resort with golf, tennis, and every water sport under the sun, and the area offers exciting nightlife, fabulous shopping, and historic Mayan ruins.
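(Returning from the travel notes to the workshop's technical premise: the model above fixes the external wiring while each processor locally fuses or splits its own I/O ports. The textbook warm-up on such a machine is computing the OR of one bit per processor in constant time on a one-dimensional reconfigurable bus. The toy C simulation below is a hedged sketch of that idea only -- it is not taken from the RAW'94 material, and the segment bookkeeping is a deliberate simplification.)
---------------------------------------------------------------------
/* Sketch: constant-time OR on a 1-D reconfigurable bus of N processors. */
#include <stdio.h>

#define N 8

int main(void)
{
    int bit[N] = {0, 0, 1, 0, 0, 1, 0, 0};  /* one input bit per processor        */
    int segment[N];                          /* bus segment id at each PE's left port */
    int signal[N] = {0};                     /* is a signal driven on segment s?   */
    int i, seg = 0, or_result;

    /* Local switch setting: a PE fuses its left and right ports iff its bit
     * is 0; a PE holding 1 leaves them split, so a new bus segment starts
     * at its right port. */
    for (i = 0; i < N; i++) {
        segment[i] = seg;
        if (bit[i] == 1)
            seg++;
    }

    /* Every PE holding a 1 drives a signal onto the segment at its left port. */
    for (i = 0; i < N; i++)
        if (bit[i] == 1)
            signal[segment[i]] = 1;

    /* PE 0 listens on its own segment: a signal there (or its own bit being 1)
     * means the OR of all N bits is 1 -- one write and one read, O(1) time. */
    or_result = (bit[0] == 1) || signal[segment[0]];
    printf("OR of the %d input bits = %d\n", N, or_result);
    return 0;
}
---------------------------------------------------------------------
(Every processor holding a 0 fuses its two ports, every processor holding a 1 writes on the segment at its left port, and processor 0 hears a signal exactly when at least one input bit is 1, independent of N.)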
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: schooler@apollo.hp.com (Richard Schooler) Subject: Cray T3D Cache Coherence Organization: HP/Apollo Massachusetts Language Lab I've just been reading the Cray T3D Architecture Overview, and I'm wondering how cache coherence works. The impression I get is that there is no complete hardware support: it's up to the programmer to use memory barriers after writes, and to specify non-caching reads. Is that true? -- Richard schooler@ch.hp.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rminnich@super.org (Ronald G Minnich) Subject: REAL numbers on SP/1 latency Organization: Supercomputing Research Center, Bowie, MD OK, I keep hearing this 500 nsec. number for SP/1 node to node latency, and I can't take it any more. So could a (non-IBM-marketing) source out there please: 1) MEASURE the round-trip time, application to application, with the application run in user mode, language of your choice, on an SP/1, for two different nodes, and 2) report it here? thanks ron -- rminnich@super.org (301)-805-7451 or 7312 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: curtin@btgmax.zko.dec.com (Paul Curtin) Subject: Thinking Machines CM2 for sale or lease. Reply-To: curtin@btgmax.zko.dec.com (Paul Curtin) Organization: Digital Equipment Corp. TO: HIGH PERFORMANCE SYSTEMS USERS FROM: OLD COLONY GROUP LEASING, INC. DATE: SEPTEMBER 20, 1993 RE: MASSIVELY PARALLEL PROCESSOR Old Colony Group Leasing, Inc. has available for sale or lease a Thinking Machines 16,000-processor CM2 computer with 64-bit floating point and a 10 gigabyte data vault for as little as $1.00 per processor per month. This system is also available immediately at a purchase price of $50.00 per processor. For more information regarding configuration and lease terms, please call either me or Joe Bonanno at (800) 447-7728. Old Colony Group Leasing is a woman-owned business enterprise. OLD COLONY GROUP LEASING, INC. Ellen F. Kennedy, President Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: saroff@msc.edu (Stephen Saroff MSCI/AHPCRC) Sender: saroff@ea.msc.edu Subject: SP-1 information Does anyone have a quick overview on the new IBM MPP system? -- Stephen Saroff: saroff@msc.edu -- Minnesota Supercomputer Center, Inc -- 1200 Washington Ave S; Minneapolis, MN 55415 -- 612/337 3423 Fax: 612/337 3400 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ad@egr.duke.edu (Apostolos Dollas) Subject: RSP 94 Call for Papers/Interest Organization: Duke University EE Dept.; Durham, NC Dear colleague: You will find below the call for papers for RSP 94, the Fifth IEEE International Workshop on Rapid System Prototyping. We also want to establish a mailing list (electronic and regular) of people interested in rapid system prototyping. To be included in this list, please send e-mail to Apostolos Dollas (ad@ee.duke.edu). We will notify you of updates, send you the Workshop program when it is established, and send you the Call for Participation. We do not expect extensive mailings but rather to reach the researchers and practitioners in the field from industry and academia. Your thoughts and comments are welcome. Please pass this information to others who may be interested.
CALL FOR PAPERS 5th IEEE INTERNATIONAL WORKSHOP ON RAPID SYSTEM PROTOTYPING June 21-23, 1994 Grenoble (Grand Hotel de Paris at Villard de Lans), France Requirements, Specifications, Integration and Prototyping of Hardware and Software for Computer Based Systems The IEEE International Workshop on Rapid System Prototyping presents and explores the trends in rapid prototyping of Computer Based Systems including, but not limited to, communications, information, and manufacturing systems. The fifth annual workshop will focus on improved approaches to resolving prototyping issues and problems raised by incomplete specifications, increased system complexity and reduced time to market requirements for a multitude of products. The workshop will include a keynote presentation and formal paper sessions with a wide range of system prototyping topics, which include, but are not limited to, the following: * Development of system requirements * Prototyping case studies * Requirements interpretation * Very large scale system engineering * Specification consistency checking * Hardware/software tradeoffs * Tools for hardware prototyping * System verification/validation * Tools for software prototyping * Design specification * The role of FPGAs in system prototyping * Prototype to product transition The program committee invites authors to submit five copies of an extended summary or a full paper (preferred) presenting original and unpublished work. Clearly describe the nature of the work, explain its significance, highlight its novel features, and state its current status. Authors of selected papers will be requested to prepare a manuscript for the workshop proceedings. The official language of the workshop is English. * Papers due: January 10, 1994 * Notification of Acceptance: February 10, 1994 * Final Camera Ready Manuscript due March 24, 1994 Submit all papers to Dr. Nick Kanopoulos, program chair (address below). The workshop will be held in the resort town of Villard de Lans (near Grenoble) in France. Transportation between the workshop site and Grenoble will be provided for attendees arriving and departing on June 20th and 23rd, respectively. Workshop Chairperson: Bernard Courtois INPG/TIMA 46 Avenue Felix Viallet 38031 Grenoble Cedex France 33-76-574-615 Fax: 33-76-473-814 courtois@archi.imag.fr Program Chairperson: Nick Kanopoulos Center for Digital Sys. Engineering Research Triangle Institute 3040 Cornwallis Road Research Triangle Park, NC 27709 (919) 541-7341 Fax: (919) 541-6515 rsp@rti.rti.org Asian Contact: Akihiko Yamada Tokyo Metropolitan University Hachioji, Tokyo Japan +81 426 77 2748 Fax: +81 426 77 2756 Chairman Emeritus: Ken Anderson Program Committee: Theo Antonakopoulos (Univ. of Patras), John Beetem (MITRE), Warren Debany (Rome Laboratory), Apostolos Dollas (Duke), Paul Hulina (Penn. State Univ.), Rudy Lauwereins (Katholieke Universiteit Leuven), Stanley Winkler (NIST), Mark Engels (Katholieke Universiteit Leuven), Manfred Glesner (Technische Hochschule Darmstadt), Ahmed Amine Jerraya (INPG/TIMA), Peter Henderson (University of Southampton) Cosponsored by: IEEE Computer Society Technical Committees on: Design Automation * Simulation * Test Technology Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: elammari@scs.carleton.ca (M. Elammari) Subject: Finding smallest value on Hypercubes Organization: Carleton University I am looking for parallel algorithms for finding the smallest value on Hypercubes.
Any help/references will be appreciated. Thank you in advance. -- M. Elammari Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: Mail Problems and News Problems For the past two weeks, Eugene Miya and I have been tracking problems in news distribution. At first, it appeared to be localized and just comp.parallel. Our system people assure me that the news is clearing Clemson properly. Once it gets outside us, there's really no way to deal directly with tracking articles. Eugene now tells me that the problem is not just in comp.parallel. If you are experiencing difficulties, please contact your system people. If you have posted something and it has not been seen yet, please let me know. Steve =========================== MODERATOR ============================== Steve Stevenson fpst@hubcap.clemson.edu Administrative address steve@hubcap.clemson.edu Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell Wanted: Sterbetz, P. Floating Point Computation Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kumar-b@cis.ohio-state.edu (bharat kumar) Subject: Ordering in sparse Cholesky factorization Organization: The Ohio State University Dept. of Computer and Info. Science I'm looking for papers on ordering of symmetric positive definite matrices to minimize fill-in and maximize parallelism, and the mapping of computation to processors. Please send email to kumar-b@cis.ohio-state.edu Thanks in advance, Bharat Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stratton@dcs.warwick.ac.uk (Andrew Stratton) Subject: Here is the LaTeX source for `Twelve ways to fool the masses..' Organization: Department of Computer Science, Warwick University, England David Bailey very kindly sent me a LaTeX copy of the report `Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers'. I enclose the source, with his permission, below. Thank you to every one for their help, especially David Bailey, Andy Stratton --------------------------------------------------------------------- \documentstyle[12pt,fleqn]{article} \setlength{\oddsidemargin}{0cm} \setlength{\textwidth}{16.2cm} \setlength{\columnwidth}{16.2cm} \setlength{\topmargin}{0cm} \setlength{\textheight}{21.5cm} \begin{document} \vspace*{2.7cm} \begin{large} \begin {center} Twelve Ways to Fool the Masses When Giving \\ Performance Results on Parallel Computers \\ David H. Bailey \\ RNR Technical Report RNR-91-020 \\ June 11, 1991 \end{center} \end{large} \vspace{3ex} \noindent {\bf Abstract} Many of us in the field of highly parallel scientific computing recognize that it is often quite difficult to match the run time performance of the best conventional supercomputers. This humorous article outlines twelve ways commonly used in scientific papers and presentations to artificially boost performance rates and to present these results in the ``best possible light'' compared to other systems. \vfill{ The author is with the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center, Moffett Field, CA 94035.} \newpage Many of us in the field of highly parallel scientific computing recognize that it is often quite difficult to match the run time performance of the best conventional supercomputers. 
But since lay persons usually don't appreciate these difficulties and therefore don't understand when we quote mediocre performance results, it is often necessary for us to adopt some advanced techniques in order to deflect attention from possibly unfavorable facts. Here are some of the most effective methods, as observed from recent scientific papers and technical presentations: \vspace{2ex} \noindent {\bf 1. Quote only 32-bit performance results, not 64-bit results.} We all know that it is hard to obtain impressive performance using 64-bit floating point arithmetic. Some research systems do not even have 64-bit hardware. Thus always quote 32-bit results, and avoid mentioning this fact if at all possible. Better still, compare your 32-bit results with 64-bit results on other systems. 32-bit arithmetic may or may not be appropriate for your application, but the audience doesn't need to be bothered with such details. \vspace{2ex} \noindent {\bf 2. Present performance figures for an inner kernel, and then represent these figures as the performance of the entire application.} It is quite difficult to obtain high performance on a complete large-scale scientific application, timed from beginning of execution through completion. There is often a great deal of data movement and initialization that depresses overall performance rates. A good solution to this dilemma is to present results for an inner kernel of an application, which can be souped up with artificial tricks. Then imply in your presentation that these rates are equivalent to the overall performance of the entire application. \vspace{2ex} \noindent {\bf 3. Quietly employ assembly code and other low-level language constructs.} It is often hard to obtain good performance from straightforward Fortran or C code that employs the usual parallel programming constructs, due to compiler weaknesses on many highly parallel computer systems. Thus you should feel free to employ assembly-coded computation kernels, customized communication routines and other low-level code in your parallel implementation. Don't mention such usage, though, since it might alarm the audience to learn that assembly-level coding is necessary to obtain respectable performance. \vspace{2ex} \noindent {\bf 4. Scale up the problem size with the number of processors, but omit any mention of this fact.} Graphs of performance rates versus the number of processors have a nasty habit of trailing off. This problem can easily be remedied by plotting the performance rates for problems whose sizes scale up with the number of processors. The important point is to omit any mention of this scaling in your plots and tables. Clearly disclosing this fact might raise questions about the efficiency of your implementation. \vspace{2ex} \noindent {\bf 5. Quote performance results projected to a full system.} Few labs can afford a full-scale parallel computer --- such systems cost millions of dollars. Unfortunately, the performance of a code on a scaled down system is often not very impressive. There is a straightforward solution to this dilemma --- project your performance results linearly to a full system, and quote the projected results, without justifying the linear scaling. Be very careful not to mention this projection, however, since it could seriously undermine your performance claims for the audience to realize that you did not actually obtain your results on real full-scale hardware. \vspace{2ex} \noindent {\bf 6. 
Compare your results against scalar, unoptimized code on Crays.} It really impresses the audience when you can state that your code runs several times faster than a Cray, currently the world's dominant supercomputer. Unfortunately, with a little tuning many applications run quite fast on Crays. Therefore you must be careful not to do any tuning on the Cray code. Do not insert vectorization directives, and if you find any, remove them. In extreme cases it may be necessary to disable all vectorization with a command line flag. Also, Crays often run much slower with bank conflicts, so be sure that your Cray code accesses data with large, power-of-two strides whenever possible. It is also important to avoid multitasking and autotasking on Crays --- imply in your paper that the one processor Cray performance rates you are comparing against represent the full potential of a \$25 million Cray system. \vspace{2ex} \noindent {\bf 7. When direct run time comparisons are required, compare with an old code on an obsolete system.} Direct run time comparisons can be quite embarrassing, especially if your parallel code runs significantly slower than an implementation on a conventional system. If you are challenged to provide such figures, compare your results with the performance of an obsolete code running on obsolete hardware with an obsolete compiler. For example, you can state that your parallel performance is ``100 times faster than a VAX 11/780''. A related technique is to compare your results with results on another less capable parallel system or minisupercomputer. Keep in mind the bumper sticker ``We may be slow, but we're ahead of you.'' \vspace{2ex} \noindent {\bf 8. If MFLOPS rates must be quoted, base the operation count on the parallel implementation, not on the best sequential implementation.} We know that MFLOPS rates of a parallel codes are often not very impressive. Fortunately, there are some tricks that can make these figures more respectable. The most effective scheme is to compute the operation count based on an inflated parallel implementation. Parallel implementations often perform far more floating point operations than the best sequential implementation. Often millions of operations are masked out or merely repeated in each processor. Millions more can be included simply by inserting a few dummy loops that do nothing. Including these operations in the count will greatly increase the resulting MFLOPS rate and make your code look like a real winner. \vspace{2ex} \noindent {\bf 9. Quote performance in terms of processor utilization, parallel speedups or MFLOPS per dollar.} As mentioned above, run time or even MFLOPS comparisons of codes on parallel systems with equivalent codes on conventional supercomputers are often not favorable. Thus whenever possible, use other performance measures. One of the best is ``processor utilization'' figures. It sounds great when you can claim that all processors are busy nearly 100\% of the time, even if what they are actually busy with is synchronization and communication overhead. Another useful statistic is ``parallel speedup'' --- you can claim ``fully linear'' speedup simply by making sure that the single processor version runs sufficiently slowly. For example, make sure that the single processor version includes synchronization and communication overhead, even though this code is not necessary when running on only one processor. A third statistic that many in the field have found useful is ``MFLOPS per dollar''. 
Be sure not to use ``sustained MFLOPS per dollar'', i.e. actual delivered computational throughput per dollar, since these figures are often not favorable to new computer systems. \vspace{2ex} \noindent {\bf 10. Mutilate the algorithm used in the parallel implementation to match the architecture.} Everyone is aware that algorithmic changes are often necessary when we port applications to parallel computers. Thus in your parallel implementation, it is essential that you select algorithms which exhibit high MFLOPS performance rates, without regard to fundamental efficiency. Unfortunately, such algorithmic changes often result in a code that requires far more time to complete the solution. For example, explicit linear system solvers for partial differential equation applications typically run at rather high MFLOPS rates on parallel computers, although they in many cases converge much slower than implicit or multigrid methods. For this reason you must be careful to downplay your changes to the algorithm, because otherwise the audience might wonder why you employed such an inappropriate solution technique. \vspace{2ex} \noindent {\bf 11. Measure parallel run times on a dedicated system, but measure conventional run times in a busy environment.} There are a number of ways to further boost the performance of your parallel code relative to the conventional code. One way is to make many runs on both systems, and then publish the best time for the parallel system and the worst time for the conventional system. Another is to time your parallel computer code on a dedicated system and time your conventional code in a normal loaded environment. After all, your conventional supercomputer is very busy, and it is hard to arrange dedicated time. If anyone in the audience asks why the parallel system is freely available for dedicated runs, but the conventional system isn't, change the subject. \vspace{2ex} \noindent {\bf 12. If all else fails, show pretty pictures and animated videos, and don't talk about performance.} It sometimes happens that the audience starts to ask all sorts of embarrassing questions. These people simply have no respect for the authorities of our field. If you are so unfortunate as to be the object of such disrespect, there is always a way out --- simply conclude your technical presentation and roll the videotape. Audiences love razzle-dazzle color graphics, and this material often helps deflect attention from the substantive technical issues. \vspace{3ex} \noindent {\bf Acknowledgments} The author wishes to acknowledge helpful contributions and comments by the following persons: R. Bailey, E. Barszcz, R. Fatoohi, P. Frederickson, J. McGraw, J. Riganati, R. Schreiber, H. Simon, V. Venkatakrishnan, S. Weeratunga, J. Winget and M. Zosel. \end{document} Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: The Future of Parallel Computing Organization: Professional Student, University of Maryland, College Park References: <1993Oct14.131912.9126@hubcap.clemson.edu> In article <1993Oct14.131912.9126@hubcap.clemson.edu> Peter Su writes: >Or, do I, for the purposes of benchmarking, have to regard the >vendor's brain-dead compiler as part of the system. Aren't we trying >to figure out how good the *hardware* is, not the hardware+compiler? This brings up an interested question about the direction that the field of parallel computing is headed towards. 
That is, is the hardware growing too fast and neglecting the software and algorithmic concerns? When we write an algorithm in a sequential programming language such as ANSI C or Fortran 77, our code is portable to all "similar" machines. The compiler has the job of converting the high-level representation into a machine code which will run as efficiently as possible on the underlying hardware, whether it is a RISC or CISC processor, for example. Also, as hardware is upgraded, the algorithmic representations do not need to be rewritten. Let's face it, after we create a large application, we do not want to have to rewrite it every time we get the "latest" machine, or even the next generation of our current series of machine. Is the same true for the emergence of parallel computing? In my opinion, no. We have not ground out a "standard" representation for the model of parallel computing. We have not put enough effort into the theory of parallel algorithmics. Throwing faster hardware at us will not solve the problem. Even if the benchmark time for a given application is cut in half, what happens as we try to increase the problem size by a factor of K ? The compiler then must have the task of decomposing the algorithm onto the underlying hardware. It is just wrong to require the programmer to have a detailed knowledge of the hardware, data layout, and compiler tricks just to get anywhere near "benchmarked" performance rates. We are now in an age when the high performance machines have various data network topologies, i.e. meshes, torii, linear arrays, vector processors, hypercubes, fat-trees, switching networks, etc.. etc.. These parallel machines might all have sexy architectures, but we are headed in the wrong direction if we don't take a step back and look at the future of our work. We shouldn't have to rewrite our algorithms from scratch each time our vendor sells us the latest hardware with amazing benchmarks. Benchmarks should also be attainable from STANDARD compiler options. We should NOT have to streamline routines in assembly language, give data layout directives, nor understand the complexities of the hardware and/or data network. Please let me know what you think, Thanks, david David A. Bader Electrical Engineering Department A.V. Williams Building University of Maryland College Park, MD 20742 301-405-6755 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: O.Naim@ecs.soton.ac.uk (Oscar Naim) Subject: Are there any nCUBE/3200? Organization: Electronics and Computer Science, University of Southampton Hi all, Does anybody know if the nCUBE/3200 model exists? I would appreciate very much any information about the models availables from this company. Thanks in advance! Oscar. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: fsang@kira.lerc.nasa.gov (Angela Quealy) Subject: DQS users? Organization: NASA Lewis Research Center [Cleveland, Ohio] We recently acquired DQS for use at our site, and we are trying to come up with a policy for its use on our dedicated cluster of 32 IBM RS6000s. Our users will submit a variety of parallel and serial jobs to this test-bed environment, including test/debug runs, benchmark runs, and production jobs which could run for a full month or more. We are using both the APPL and PVM message passing libraries. 
I was wondering what other sites are using DQS, what your experience has been so far, and what kind of queue configuration/policy you are using. Also, in what kind of environment are you running DQS? (a dedicated cluster, or a loose cluster of individually-owned workstations?) Angela Quealy quealy@lerc.nasa.gov -- *********************************************************************** * Angela Quealy quealy@lerc.nasa.gov * * Sverdrup Technology, Inc. (216) 977-1297 * * NASA Lewis Research Center Group * *********************************************************************** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hbchen@cse.uta.edu (Hsing B Chen) Subject: CFP, 1993 IEEE 5th Sym. on Parallel and Distributed Processing Organization: Computer Science Engineering at the University of Texas at Arlington ===================================================================== Call for participation IEEE 5th SPDP, 1993 Dallas, Texas ===================================================================== FIFTH IEEE SYMPOSIUM ON PARALLEL AND DISTRIBUTED PROCESSING Sponsors: IEEE-Computer Society and IEEE-CS-Dallas Chapter Omni Mandalay Hotel, Irving, Texas - December 1-4, 1993 This symposium provides a forum for the presentation and exchange of current work on a wide variety of topics in parallel and distributed processing including: Computer Architecture Neural Networks Artificial Intelligence Simulation and Modeling Programming Languages Interconnection Networks Parallel Algorithms Distributed Computing Operating Systems Scheduling VLSI Systems Design Parallel Applications Database and Knowledge-base Systems The technical program will be held on December 1-3, 1993 and the tutorials will be held on December 4, 1993. Tutorials: Full-day Tutorials (December 4: 9:00 am - 5:30 pm): T1: Roles of Optics in Parallel Computing and High-Speed Communications, Ahmed Louri, Univ. of Arizona. T2: Functional Programming, Patrick Miller and John Feo, Lawrence Livermore National Laboratory. Half-day Tutorials (December 4): T3: (9:00 am - 12:30 pm): Instruction Scheduling, Barbara Simons and Vivek Sarkar, IBM Corp. T4: (2:00 pm - 5:30 pm): Software Systems and Tools for Distributed Programming, Anand Tripathi, University of Minnesota. Hotel Reservations: Please place your reservations directly with Omni Mandalay Hotel at Las Colinas, 221 East Las Colinas Blvd., Irving, Texas 75039, Tel: (214) 556-0800, Or (800) 843-6664. You must mention that you are attending SPDP in order to receive the special symposium rate of $94/night for a single or a double room. Please check with the reservations desk for the applicability of other special rates, such as those available to AAA members. Reservations should be made before November 16, 1993. After this date, reservations are subject to space availability. Directions: Omni Mandalay Hotel, the conference site, is located in the Las Colinas development area in the city of Irving (a suburb of Dallas). The hotel is about 10 minutes from the Dallas/Fort Worth (DFW) International Airport. By Car: Take DFW Int'l Airport north exit. Take Highway 114 East towards Dallas, go approximately 8 miles to OConnor Road Exit. Turn left, go two blocks, turn right on Las Colinas Blvd. The hotel will be 200 yards on the left. Shuttle Service: Super Shuttle provides van service from DFW Int'l Airport to the Omni Mandalay for $8.50 per person each way. For more information and reservations, call 817-329-2002. 
Weather: Dallas weather in early December ranges from low 40's to high 60's Fahrenheit. [ASCII map of the Irving/Las Colinas area omitted; it showed the hotel relative to D/FW Airport, Highway 114, Highway 183, and O'Connor Rd. Legend- LC: City Las Colinas *: Omni Mandalay Hotel (SPDP location) B: Dallas Cowboys Football Stadium (Texas Stadium) O: O'Connor Rd.] - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - PLEASE SEND REGISTRATION FORM AND PAYMENT (payable to SPDP) TO: Dr. Behrooz Shirazi, University of Texas at Arlington, Dept. of Computer Science & Engineering, 416 Yates, Room 300, Arlington, TX 76019, Tel: (817) 273-3605, Fax: (817) 273-3784, E-mail: shirazi@cse.uta.edu. The symposium registration includes the symposium proceedings, banquet, and luncheon. Student registration does not include the symposium proceedings or the luncheon. (Advance Registration: Before 11/19/93.)
                           IEEE Members   Non-Members   Students
  Advance Registration:
    Symposium:                US$260        US$325       US$100
    Full-day Tutorial:        US$220        US$275       US$220
    Half-day Tutorial:        US$110        US$140       US$110
  On-site Registration:
    Symposium:                US$310        US$385       US$120
    Full-day Tutorial:        US$265        US$330       US$265
    Half-day Tutorial:        US$135        US$165       US$135
IEEE Member:____ Non-Member:____ Student:____ IEEE/Student No.:_______________________ Symposium: $_______ Tutorial: $_______, Specify choice of tutorial(s): T____ Total: $_______ _____Check enclosed, USA BANKS ONLY (payable to SPDP) ____Credit Card (VISA or Master Card ONLY) VISA___ or Master Card___ Credit Card No.:____________________________________ Expiration Date:____________ Signature:_____________________________ Last Name:_____________________________ First Name:________________________ Middle Initial:______ Organization: __________________________________________ Address: _____________________________________________ _______________________________________________ City, State, Zip/Country:__________________________________ Phone: ___________________, Fax: ____________________ E-mail:_______________________________ ======================================================================= Technical Program Fifth IEEE Symposium on Parallel and Distributed Processing Sponsors: IEEE-Computer Society and IEEE-CS-Dallas Chapter Omni Mandalay Hotel, Irving, Texas December 1-4, 1993 Note: L: 30-minute Presentation S: 15-minute Presentation Wednesday, December 1, 1993 8:30-9:30 am: On-site Registration - Conference material available at the Registration Desk 9:30-10:00 am: Opening and Awards Session (Mandalay West) 10:00-10:30 am: Break 10:30-12:00 noon Session A-1: Applications & Experimental Results I (Mandalay West) Chair: Simin Pakzad (Penn State University) L: Total Exchange on a Reconfigurable Parallel Architecture - by Yuh-Dauh Lyuu, Eugen Schenfeld L: Experimental Evaluation of Performance and Scalability of a Multiprogrammed Shared-Memory Multiprocessor - by Chitra Natarajan, Ravi Iyer S: Parallel Bidirectional A* Search on a Symmetry Multiprocessor - by Andrew Sohn S: Characterizing Execution Behavior of Application Programs on Network-based Shared-memory Multiprocessors - by Xiaodong Zhang, Keqiang He, Elisa W. Chan Session B-1: Architecture I (Martaban) Chair: S. Lakshmivarahan (Univ.
of Oklahoma) L: The Meerkat Multicomputer - by Robert Bedichek, Curtis Brown L: Correctness of a Directory-Based Cache Coherence Protocol: Early Experience - by Fong Pong, Michel Dubois S: Cache Design for an Explicit Token Store Data Flow Architecture - by P. Shanmugam, Shirish Andhare, Krishna Kavi, Behrooz Shirazi S: Architectural Support for Block Transfers in a Shared Memory Multiprocessor - by Steven J. E. Wilton, Zvonko G. Vranesic Session C-1: Wormhole Routing (Rangoon) Chair: Robert Cypher (IBM Almaden) L: Universal Wormhole Routing - by R. Greenberg and H. C. Oh L: A New Theory of Deadlock-free Adaptive Multicast Routing in Wormhole Networks - by J. Duato L: Adaptive Wormhole Routing in Hypercube Multicomputers - by X. Lin, A-H. Esfahanian, P.K. McKinley, A. Burago 12:00 - 1:30 pm: LUNCH 1:30 - 3:00 pm Session A-2: Storage Management (Mandalay West) Chair: Margaret Eich (SMU) L: Parallel Dynamic Storage Allocation Algorithms - by Arun Iyengar L: Storage Schemes for Parallel Memory Systems: An Approach Based on Circulant Matrices - by Cengiz Erbas, Murat M. Tanik, V. S. S. Nair S: Parallel Garbage Collection and Graph Reducer - by Wen-Yan Kuo, Sy-Yen Kuo Session B-2: Multithreading (Martaban) Chair: Krishna Kavi (NSF) L: An Evaluation of Software Multithreading in a Conventional Distributed Memory Multiprocessor - by Matthew Haines, Wim Bohm L: Analysis of Multithreaded Multiprocessors with Distributed Shared Memory - by Shashank S. Nemawarkar, R. Govindarajan, Guang R. Gao, Vinod K. Agarwal S: RICA: Reduced Interprocessor Communication Architecture - by Shuichi Sakai, Y. Kodama, M. Sato, A. Shaw, et al. Session C-2: Applications I (Rangoon) Chair: Phil Gibbons (ATT Bell Laboratories) L: Solving Markov Chains Using Bounded Aggregation on a Massively Parallel Processor - by R.B. Mattingly L: Direct and Iterative Parallel Methods for Boundary Value Problems - by I. Gladwell, G.Kraut L: Efficient Parallel Sibling Finding for Quadtree Data Structure - by D. Doctor and I. Sudborough 3:00 - 3:30 pm: BREAK 3:30 - 5:00 pm Session A-3: Interconnection Networks/ Routing I (Mandalay West) Chair: Nian-Feng Tzeng (USL) L: Analysis of Interconnection Networks Based on Simple Cayley Coset Graphs - by Jen-Peng Huang, S. Lakshmivarahan, S. K. Dhall L: An Efficient Routing Scheme for Scalable Hierarchical Networks - by Hyunmin Park, Dharma P. Agrawal S: Performance Evaluation of Idealized Adaptive Routing on k-ary n-cubes - by A. Lagman, W. A. Najjar, S. Sur, P. Srimani S: The B&E Model for Adoptable Wormhole Routing - by Xiaowei Shen, Y. S. Cheung Session B-3: Performance Evaluation I (Martaban) Chair: Diane Cook (UT-Arlington) L: Comparative Performance Analysis and Evaluation of Hot Spots on MIN-Based and HR-Based Shared-Memory Architectures - by Xiaodong Zhang, Yong Yan, Robert Castaneda L: Application of Parallel Disks for Efficient Handling of Object- Oriented Databases - by Y. C. Chehadeh, A. R. Hurson, L. L. Miller, B. N. Jamoussi S: The Parallel State Processor Model - by I. Gottlieb, L. Biran Session C-3: Geometric Algorithms (Rangoon) Chair: Cynthia Phillips (Sandia Nat'l Lab) L: Parallel Algorithms for Geometric Problems on Networks of Processors - by J. Tsay L: Optimal Parallel Hypercube Algorithms for Polygon Problems - by M. Atallah, D. Chen L: A Parallel Euclidean Distance Transformation Algorithm - by H. Embrechts, D. 
Roose 5:00 - 6:00 pm: BREAK 6:00 - 9:00 pm: CONFERENCE RECEPTION (Hors d'oeuvres and Cash Bar) Thursday, December 2, 1993 8:30 - 10:00 am Session A-4: Distributed Systems I (Mandalay West) Chair: Ray Liuzzi (Air Force Rome Labs) L: An Efficient and Reliable Multicast Algorithm - by Rosario Aiello, Elena Pagani, Gian Paolo Rossi L: An Efficient Load Balancing Algorithm in Distributed Computing Systems - by Jea-Cheoul Ryou, Jie-Yong Juang S: Assertions about Past and Future: Communication in a High Performance Distributed System Highways- by Mohan Ahuja S: Protocol Refinement for Maintaining Replicated Data in Distributed Systems - by D. Shou, Sheng-De Wang Session B-4: Performance Evaluation II (Martaban) Chair: Hee Yong Youn (UT-Arlington) L: A Methodology for the Performance Prediction of Massively Parallel Applications - by Daniel Menasce, Sam H. Noh, Satish K. Tripathi L: Determining External Contention Delay Due to Job Interactions in a 2-D Mesh Wormhole Routed Multicomputer - by Dugki Min, Matt W. Mutka L: Simulated Behaviour of Large Scale SCI Rings and Tori - by H. Cha, R. Daniel Jr. Session C-4: Mesh Computations (Rangoon) Chair: Abhiram Ranade (UC- Berkeley) L: Becoming a Better Host Through Origami: a Mesh is More Than Rows and Columns - by D. Greenberg, J. Park, E. Schwabe L: Deterministic Permutation Routing on Meshes - by B. Chlebus, M. Kaufmann, J. Sibeyn S: Dilation-5 Embedding of 3-Dimensional Grids into Hypercubes - by M. Chan, F. Chin, C. N. Chu, W. K. Mak 10:00 - 10:30 am: BREAK 10:30 - 12:00 noon Session A-5: Applications and Experimental Results II (Mandalay West) Chair: Doug Matzke (Texas Instruments) L: Scalable Duplicate Pruning Strategies for Parallel A* Graph Search - by Nihar R. Mahapatra, Shantanu Dutt L: A Parallel Implementation of a Hidden Markov Model with Duration Modeling for Speech Recognition - by C.D. Mitchell, R.A. Helzerman, L.H. Jamieson, M.P. Harper S: Performance Comparison of the CM-5 and Intel Touchstone Delta for Data Parallel Operations - by Zeki Bozkus, Sanjay Ranka, Geoffrey Fox, Alok Choudhary Session B-5: Interconnection Networks/Routing II (Martaban) Chair: Laxmi Bhuyan (Texas A&M Univ.) L: Analysis of Link Traffic in Incomplete Hypercubes - by Nian- Feng Tzeng, Harish Kumar L: Multicast Bitonic Network - by Majed Z. Al-Hajery, Kenneth E. Batcher L: Valved Routing: Implementing Traffic Control in Misrouting on Interconnection Network - by Wei-Kuo Liao, Chung-Ta King Session C-5: Message-Passing Systems (Rangoon) Chair: Sandeep Bhatt (Bellcore) L: Computing Global Combine Operations in the Multi-Port Postal Model - by A. Bar-Noy, J. Bruck, C.T. Ho, S. Kipnis, B. Schieber S: Broadcasting Multiple Messages in Simultaneous Send/Receive Systems - by A. Bar-Noy, S. Kipnis S: Fault Tolerant Broadcasting in SIMD Hypercubes - by Y. Chang S: Notes on Maekawa's O(sqrt N) Distributed Mutual Exclusion Algorithm - by Ye-In Chang 12:00 - 2:00 pm: CONFERENCE LUNCHEON and KEYNOTE SPEECH (Salon D) Stephen L. Squires (Advanced Research Projects Agency) High Performance Computing and National Scale Information Enterprises 2:00 - 3:30 pm Session A-6: Partitioning and Mapping I (Mandalay West) Chair: Jeff Marquis (E-Systems) L: Partitioning and Mapping a Class of Parallel Multiprocessor Simulation Models - by H. Sellami, S. Yalamanchili L: An Efficient Mapping of Feed-Forward with Back Propagation ANNs on Hypercubes - by Q. M. Malluhi, M. A. Bayoumi, T. R. N. Rao S: Data Partitioning for Networked Parallel Processing - by Phyllis E. Crandall, Michael J. 
Quinn Session B-6: Architecture II (Martaban) Chair: Dharma P. Agrawal (North Carolina State University) L: Designing a Coprocessor for Recurrent Computations - by K. Ganapathy, B. Wah L: Analysis of Control Parallelism in SIMD Instruction Streams - by J. Allen, V. Garg, D. E. Schimmel L: Representation of Coherency Classes for Parallel Systems - by J. A. Keane, W. Hussak Session C-6: Applications II (Rangoon) Chair: Hal Sudborough (UT-Dallas) L: A Parallel Lattice Basis Reduction for Mesh-Connected Processor Arrays and Parallel Complexity - by Ch. Heckler, L. Thiele L: Parallel Network Dual Simplex Method on a Shared Memory Multiprocessor - by K. Thulasiraman, R.P. Chalasani, M.A. Comeau S: Parallel Simulated Annealing by Generalized Speculative Computation - by Andrew Sohn, Zhihong Wu, Xue Jin 3:30 - 4:00 pm: BREAK 4:00 - 5:30 pm Session A-7: Languages I (Mandalay West) Chair: Benjamin Wah (University of Illinois at Urbana-Champaign) L: On the Granularity of Events when Modeling Program Executions - by Eric Leu, Andre Schiper L: Cloning ADT Modules to Increase Parallelism: Rationale and Techniques - by Lonnie R. Welch L: The Design and Implementation of Late Binding in a Distributed Programming Language - by Wenwey Hseush, Gail E. Kaiser Session B-7: Reliability and Fault-Tolerance I (Martaban) Chair: A. Waksman (Air Force) L: Measures of Importance and Symmetry in Distributed Systems - by Mitchell L. Neilsen S: Dependability Analysis for Large Systems: A Hierarchical Modeling Approach - by Teresa A. Dahlberg, Dharma P. Agrawal L: An Adaptive System-Level Diagnosis Approach for Hypercube Multiprocessors - by C. Feng, L. N. Bhuyan, F. Lombardi Session C-7: Distributed Algorithms (Rangoon) Chair: Ioannis Tollis (UT-Dallas) L: How to Share a Bounded Object: A Fast Timing-Based Solution - by R. Alur, G. Taubenfeld L: Using Induction to Prove Properties of Distributed Programs - by V. Garg and A. Tomlinson S: An Optimal Distributed Ear Decomposition Algorithm with Applications to Biconnectivity and Outer Planarity Testing - by A. Kazmierczak, S. Radhakrishnan S: Group Membership in a Synchronous Distributed System - by G. Alari, A. Ciuffoletti Friday, December 3, 1993 8:30 - 10:00 am Session A-8: Compilation (Mandalay West) Chair: Paraskevas Evripidou (SMU) L: Compiling Distributed C++ - by Harold Carr, Robert Kessler, Mark Swanson L: ALIAS Environment: A Compiler for Application Specific Arrays - by James J. Liu, Milos D. Ercegovac L: An Algorithm to Automate Non-Unimodular Transformations of Loop Nests - by Jingling Xue Session B-8: Languages II (Martaban) Chair: Les Miller (Iowa State University) L: Genie: An Environment for Partitioning Mapping in Embedded Multiprocessors - by S. Yalamanchili, L. Te Winkel, D. Perschbacher, B. Shenoy L: Analysis of Affine Communication Specifications - by S. Rajopadhye L: C-Linda Implementation of Distinct Element Model - by Siong K. Tang, Richard Zurawski Session C-8: Fault-Tolerant Communication (Rangoon) Chair: Yanjun Zhang (SMU) L: Multicasting in Injured Hypercubes Using Limited Global Information - by J. Wu, K. Yao L: Fault-Tolerance Properties of deBruijn and Shuffle-Exchange Networks - by M. Baumslag L: Communication Complexity of Fault-Tolerant Information Diffusion - by L. Gargano, A. Rescigno 10:00 - 10:30 am: BREAK 10:30 - 12:00 noon Session A-9: Interconnection Networks/Routing III (Mandalay West) Chair: Dhiraj K. Pradhan (Texas A&M) L: Exact Solutions to Diameter and Routing Problems in PEC Networks - by C. S. Raghavendra, M. A. 
Sridhar L: Folded Peterson Cube Networks: New Competitors for the Hyper Cube - by Sabine Oehring, Sajal K. Das S: A Unified Structure for Recursive Delta Networks - by P. Navaneethan, L. Jenkins S: Recursive Diagonal Torus: An Interconnection Network for Massively Parallel Computers - by Yulu Yang, H. Amano, H. Shibamura, T. Sueyoshi Session B-9: Potpourri (Martaban) Chair: Bill D. Carroll (UT-Arlington) L: A Processor Allocation Strategy Using Cube Coalescing in Hypercube Multicomputers - by Geunmo Kim, Hyusoo Yoon S: An Efficient Storage Protocol for Distributed Object Oriented Databases- by Min He, Les L. Miller, A. R. Hurson, D. Sheth S: Performance Effects of Synchronization in Parallel Processors - by Roger D. Chamberlain, Mark A. Franklin S: Compiling Distribution Directives in a FORTRAN 90D Compiler - by Z. Bozkus, A. Choudhary, G. Fox, T. Haupt, S. Ranka S: A Proposed Parallel Architecture for Exploring Potential Concurrence at Run-Time - by M. F. Chang, Y. K. Chan Session C-9: Parallel Algorithms (Rangoon) Chair: Farhad Shahrokhi (University of North Texas) L: Fast Rehashing in PRAM Emulations - by J. Keller L: On the Furthest-Distance-First Principle for Data Scattering with Set-Up Time - by Y-D. Lyuu L: Zero-One Sorting on the Mesh - by D. Krizanc and L. Narayanan 12:00 - 1:30 pm: LUNCH 1:30 - 3:00 pm Session A-10: Distributed Systems II (Mandalay West) Chair: Dan Moldovan (SMU) L: Incremental Garbage Collection for Causal Relationship Computation in Distributed Systems - by R. Medina S: STAR: A Fault-Tolerant System for Distributed Applications - by B. Folliot, P. Sens S: Flexible User-Definable Performance of Name Resolution Operation in Distributed File Systems - by Pradeep Kumar Sinha, Mamoru Maekawa S: A Layered Distributed Program Debugger - by Wanlei Zhou S: Distributed Algorithms on Edge Connectivity Problems - by Shi- Nine Yang, M.S. Cheng Session B-10: Partitioning and Mapping II (Martaban) Chair: Sajal Das (University of North Texas) L: Task Assignment on Distributed-Memory Systems with Adaptive Wormhole Routing - by V. Dixit-Radiya, D. Panda L: A Fast and Efficient Strategy for Submesh Allocation in Mesh- Connected Parallel Computers - by Debendra Das Sharma, Dhiraj K. Pradhan L: Scalable and Non-Intrusive Load Sharing in Distributed Heterogeneous Clusters - by Aaron J. Goldberg, Banu Ozden Session C-10: Network Communication (Rangoon) Chair: C.S. Raghavendra (WSU) L: Optimal Communication Algorithms on the Star Graph Interconnection Network - by S. Akl, P. Fragopoulou L: Embedding Between 2-D Meshes of the Same Size - by W. Liang, Q. Hu, X. Shen S: Optimal Information Dissemination in Star and Pancake Networks - by A. Ferreira, P. Berthome, S. Perennes 3:00 - 3:30 pm: BREAK 3:30 - 5:00 pm Session A-11: Applications and Experimental Results III (Mandalay West) Chair: Bertil Folliot (Universite Paris) L: Extended Distributed Genetic Algorithm for Channel Routing - by B. B. Prahalada Rao, R. C. Hansdah S: A Data-Parallel Approach to the Implementation of Weighted Medians Technique on Parallel/Super-computers - by K. P. Lam, Ed. Horne S: Matching Dissimilar Images: Model and Algorithm - by Zhang Tianxu, Lu Weixue S: Parallel Implementations of Exclusion Joins - by Chung-Dak Shum S: Point Visibility of a Simple Polygon on Reconfigurable Mesh - by Hong-Geun Kim and Yoo-Kun Cho Session B-11: Reliability and Fault-Tolerance II (Martaban) Chair: Ben Lee (Oregon State University) L: Adaptive Independent Checkpointing for Reducing Rollback Propagation - by Jian Xu, Robert H. 
B. Netzer L: Fast Polylog-Time Reconfiguration of Structurally Fault- Tolerant Multiprocessors - by Shantanu Dutt L: Real-Time Distributed Program Reliability Analysis - by Deng- Jyi Chen, Ming-Cheng Sheng, Maw Sheng Session C-11: Interconnection Networks/Routing IV (Rangoon) Chair: Sudha Yalamanchili (Georgia Tech.) L: Scalable Architectures with k-ary n-cube cluster-c organization - by Debashis Basak, Dhabaleswar Panda L: On Partially Dilated Multistage Interconnection Networks with Uniform Traffic and Nonuniform Traffic Spots - by M. Jurczyk, T. Schwederski S: Binary deBruijn Networks for Scalability and I/O Processing - by Barun K. Kar, Dhiraj K. Pradhan S: A Class of Hypercube-Like Networks - by Anirudha S. Vaidya, P. S. Nagendra Rao, S. Ravi Shankar Saturday, December 4, 1993 Tutorial T1: Roles of Optics in Parallel Computing and High-Speed Communications by Ahmed Louri - University of Arizona 8:30 am - 5:00 pm (Martaban) This tutorial will start by examining the state-of-the-art in parallel computing, including parallel processing paradigms, hardware, and software. We will then discuss the basic concepts of optics in computing and communications and the motivations for considering optics and ways in which optics might provide significant enhancements to the computing and communications technologies. The tutorial will include some case studies of optical computing and switching systems. Current research and future applications of optical computing are discussed. Tutorial T2: Functional Programming by Patrick Miller and John Feo - Lawrence Livermore National Laboratory 8:30 am - 5:00 pm (Rangoon) The objective of this tutorial is to familiarize the participants with the current state of functional languages. We will cover both theoretical and practical issues. We will explain the mathematical principals that form the foundation of functional languages, and from which they derive their advantages. We will survey a representative set of existing functional languages and different implementation strategies. We will use the functional language Sisal to expose the participants to the art of functional programming. Tutorial T3: Instruction Scheduling by Barbara Simons and Vivek Sarkar - IBM Corp. 8:30 - 12:00 noon (Nepal) In this tutorial we describe different models of deterministic scheduling, including pipeline scheduling, scheduling with inter- instructional latencies, scheduling VLIW machines, and assigned processor scheduling. In addition, we present an overview of important extensions to the basic block scheduling problem. The program dependence graph, annotated with weights, provides a good representation for global instruction scheduling beyond a basic block. Finally, we describe the close interaction between the problems of instruction scheduling and register allocation. Tutorial T4: Software Systems and Tools for Distributed Programming by Anand Tripathi - University of Minnesota 1:30-5:00 pm (Nepal) This tutorial will present an overview of the most commonly used paradigms and models for distributed computing. This discussion will address interprocess communication models and heterogeneous computing issues. An overview of the object model of computing will be presented in the context of micro-kernel architectures for distributed computing. The programming languages and tools to be discussed here include Parallel Virtual Machine (PVM), P4, Linda, Express, Mentat, Condor, CODE/ROPE, and Orca. 
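As a concrete taste of the style of programming these tools support, here is a minimal master/worker sketch using the PVM 3 C interface (illustrative only: the task name "worker", the message tags and the trivial doubling "computation" are placeholders, and error checking is omitted):

    /* Minimal PVM 3 master/worker sketch (illustrative only).
     * Assumes the executable is installed under the name "worker"
     * so that the master can spawn copies of it as workers.
     */
    #include <stdio.h>
    #include "pvm3.h"

    #define NWORKERS   4
    #define TAG_WORK   1
    #define TAG_RESULT 2

    int main(void)
    {
        int mytid  = pvm_mytid();        /* enroll this process in PVM   */
        int parent = pvm_parent();       /* PvmNoParent => we are master */

        if (parent == PvmNoParent) {     /* ---------- master ---------- */
            int tids[NWORKERS], i, n, sum = 0;
            pvm_spawn("worker", (char **)0, PvmTaskDefault,
                      "", NWORKERS, tids);
            for (i = 0; i < NWORKERS; i++) {   /* send one int to each   */
                n = i + 1;
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&n, 1, 1);
                pvm_send(tids[i], TAG_WORK);
            }
            for (i = 0; i < NWORKERS; i++) {   /* gather the results     */
                pvm_recv(-1, TAG_RESULT);
                pvm_upkint(&n, 1, 1);
                sum += n;
            }
            printf("sum of results = %d\n", sum);
        } else {                         /* ---------- worker ---------- */
            int n;
            pvm_recv(parent, TAG_WORK);
            pvm_upkint(&n, 1, 1);
            n *= 2;                      /* placeholder "computation"    */
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&n, 1, 1);
            pvm_send(parent, TAG_RESULT);
        }
        pvm_exit();
        return 0;
    }

Under PVM the same binary can act as master or worker, distinguished by whether pvm_parent() reports a parent task; tools such as Express and P4 offer broadly similar send/receive operations, while Linda replaces them with tuple-space operations.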
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: justin@postoffice.utas.edu.au (Justin Ridge) Subject: Parallelism in graphics apps Organization: University of Tasmania, Australia. Hi folks. A quick question - I have to do some research on the developing use of parallel processors in graphics applications. Can anyone here recommend any articles that are (say) available via ftp or email relating to this topic? Or can you give me any IEEE transactions etc. which deal specifically with the application of parallel processors to graphics? Even a few simple starters would be most appreciated. Thanks muchly, JR Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: fyodor@cs.uiuc.edu (Chris Kuszmaul) Subject: Re: 12 ways Organization: University of Illinois, Dept of Comp Sci, Urbana, IL References: <1993Oct14.131912.9126@hubcap.clemson.edu> In article <1993Oct14.131912.9126@hubcap.clemson.edu> Peter Su writes: >"ENM" == Eugene N Miya writes: > >ENM> Mutilate the algorithm used in the parallel implementation to >ENM> match the architecture. > >Where is the line between reasonable amounts of optimization and >'mutilating the algorithm'? > >If I take an implementation of an algorithm that does not vectorize >well (say), and 'mutilate' it into an implementation that does, is >that fair from the standpoint of good benchmarking?... etc.. Changing the algorithm, per se, is not the problem. The real problem is twofold. First, there are examples of people trying to solve sparse linear systems using some slowly convergent algorithm that parallelizes well, but requires more iterations than a more rapidly convergent algorithm that does not parallelize well. They then turn around and quote MFLOP numbers based on the efficiency of the easily parallelized algorithm without admitting that the time to completion is not at all impressive. The second, and more insidious, problem is to alter the problem definition itself so that the architecture in use is unfairly favored. For example, if you have a CM5, you might require that the algorithm include random red lights to flash in the computer room. If you have an MP2, you might require that the algorithm generate vicious volleyball players in your development group. :-) :-) CLK Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jsmith@king.mcs.drexel.edu (Justin Smith) Subject: Re: Recursive tree traversal in C* Organization: Drexel University References: <1993Oct13.120321.15631@hubcap.clemson.edu> In article <1993Oct13.120321.15631@hubcap.clemson.edu> jd@viz.cs.unh.edu (Jubin P Dave) writes: >i am trying to implement a radiosity algorithm using C*. i have a BSP tree >which i need to traverse to create a front to back list of polygons. >As C* lacks parallel pointers i use arrays to implement my tree. > >now my problem is that as function calls are scalar code i cannot use the >usual "where" control structures and call the same routine recursively. >doing so means going into an infinite loop. There is a parallel algorithm for recursive tree-traversal: but it might not be what you want. I describe it in my book Title: The Design and Analysis of Parallel Algorithms Author: Justin R. Smith Publisher: Oxford University Press ISBN: 0-19-507881-0 It is the Euler-Tour technique, and involves creating a linked list that represents the traversal, and then scanning that in parallel.
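To make the idea concrete, here is a small illustrative sketch (not taken from the book, and written as ordinary sequential C with the logically parallel steps marked in comments): each tree edge becomes two directed arcs, each arc is linked to a successor arc around its head vertex to form the Euler tour, and pointer jumping (list ranking) then computes every arc's position in the traversal in O(log n) synchronous rounds. The sample tree and array sizes are arbitrary.

    #include <stdio.h>

    #define MAXN 64          /* max vertices (arbitrary)        */
    #define MAXA (2*MAXN)    /* each undirected edge -> 2 arcs  */

    int head[MAXA], twin[MAXA];        /* arc i points at vertex head[i]     */
    int out[MAXN][MAXN], deg[MAXN];    /* ordered outgoing arcs per vertex   */
    int succ[MAXA], dist[MAXA];
    int narcs = 0;

    /* add undirected edge {u,v} as two twinned arcs */
    void add_edge(int u, int v)
    {
        int a = narcs++, b = narcs++;
        head[a] = v; head[b] = u;
        twin[a] = b; twin[b] = a;
        out[u][deg[u]++] = a;
        out[v][deg[v]++] = b;
    }

    int main(void)
    {
        int i, round;
        int tmp_succ[MAXA], tmp_dist[MAXA];

        /* a small sample tree:  0-1, 0-2, 1-3, 1-4 */
        add_edge(0, 1); add_edge(0, 2); add_edge(1, 3); add_edge(1, 4);

        /* Euler tour: the successor of arc a=(u,v) is the arc leaving v
           that follows twin(a)=(v,u) in v's circular adjacency list.
           One independent step per arc, hence "parallel".              */
        for (i = 0; i < narcs; i++) {
            int v = head[i], t = twin[i], p = 0;
            while (out[v][p] != t) p++;
            succ[i] = out[v][(p + 1) % deg[v]];
        }

        /* break the circuit into a list ending just before arc 0 */
        for (i = 0; i < narcs; i++)
            if (succ[i] == 0) succ[i] = -1;

        /* list ranking by pointer jumping: O(log n) synchronous rounds */
        for (i = 0; i < narcs; i++) dist[i] = (succ[i] == -1) ? 0 : 1;
        for (round = 0; round < 16; round++) {       /* > log2(MAXA)    */
            for (i = 0; i < narcs; i++) {            /* "in parallel"   */
                tmp_dist[i] = dist[i];
                tmp_succ[i] = succ[i];
                if (succ[i] != -1) {
                    tmp_dist[i] = dist[i] + dist[succ[i]];
                    tmp_succ[i] = succ[succ[i]];
                }
            }
            for (i = 0; i < narcs; i++) {            /* synchronous swap */
                dist[i] = tmp_dist[i];
                succ[i] = tmp_succ[i];
            }
        }

        for (i = 0; i < narcs; i++)
            printf("arc %d ends at vertex %d, distance to end of tour = %d\n",
                   i, head[i], dist[i]);
        return 0;
    }

Preorder/postorder numbers (and hence front-to-back orderings) can then be read off from the ranks of the downward and upward arcs.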
Although this book includes a fair amount of C* source-code, I don't think I wrote a program to do this tree-traversal (I left it as an exercise!). Hope this helps! -- _____________________________________________________________________________ Justin R. Smith Office: (215) 895-2671 Department of Mathematics and Computer Science Home: (215) 446-5271 Drexel University Fax: (215) 895-2070 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cheekong@iss.nus.sg (Chui Chee Kong) Subject: branch and bound Organization: Institute Of Systems Science, NUS I will appreciate it if someone can email me an answer to the following: (1) What has branch and bound been used for? I know that example applications include Integer Programming, AI knowledge base searching, the travelling salesman problem, etc. (2) What are the sizes of the problems solved using branch and bound in the practical world? (3) If I am to solve a problem using branch and bound on a distributed memory system, what is the size of communication compared to that of computation? Thank you. chee kong internet: cheekong@iss.nus.sg Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.ai,comp.research.japan,comp.lang.prolog,comp.parallel From: j-sato@goku.icot.or.jp (Motoaki SATO) Subject: IFS Newsletter No.6 (Eng) Reply-To: irpr@icot.or.jp Organization: JIPDEC/ICOT,Tokyo,JAPAN 14, October, 1993 No. 6 ICOT FREE SOFTWARE NEWSLETTER ----------------------------- [Newly released ICOT Free Software] To date, 71 programs have been released as ICOT Free Software. Now, we are releasing a further six programs. These include the experimental version of KLIC -- a KL1 programming environment for UNIX systems, a language processor for a process-oriented high-level programming language running on top of KL1, and several tools for genome information processing. 1) Newly Released Software The following lists the new programs: Symbol Processing: 3 ---------------------------------------------------------------- 72 Portable KL1 processing system: experimental version /ifs/ifsftp/symbolic-proc/unix/pkl1.tar.Z 130871 bytes 73 Process oriented programming language AYA /ifs/ifsftp/symbolic-proc/pimos/aya.tar.Z 235617 bytes 74 KL1 Load Distribution Library /ifs/ifsftp/symbolic-proc/unix/ldlib.tar.Z 123055 bytes Application programs of parallel logic programming: 3 ---------------------------------------------------------------- 75 Multiple Sequence Alignment by Parallel Iterative Aligner /ifs/ifsftp/exper-apps/pimos/multialign.tar.Z 178138 bytes 76 Intelligent Refiner for Multiple Sequence Alignment /ifs/ifsftp/exper-apps/pimos/editalign.tar.Z 710383 bytes 77 Protein Structure Visualization system: Protein View /ifs/ifsftp/exper-apps/unix/proview.tar.Z 997633 bytes 2) Detailed Description 1. Portable KL1 processing system: experimental version The first, experimental version of the KLIC system for investigating execution methods for KL1 programs on UNIX systems has been released. KLIC translates KL1 programs into equivalent C programs, which are compiled and linked with the KLIC runtime library to obtain an executable. This version runs only sequentially, but parallel versions will be available eventually. The compiler of this version is written in Prolog. An Edinburgh-compatible Prolog system and a C language system are needed to use it. With Prolog and C language systems, the KLIC system can run on any computer, including personal computers.
Future releases of the compiler will be written in KL1, making the Prolog system unnecessary. Since this version was developed to investigate execution methods for KL1 programs on UNIX systems, there are no convenience facilities such as a debugger. A version of KLIC with these facilities will be released soon. 2. Process oriented programming language AYA AYA is designed on top of KL1. Using AYA, it is easier to write and read parallel logic programs than it would be when using KL1. Furthermore, it is easy to avoid and to fix bugs in AYA programs. Message communications between processes can be expressed directly in AYA programs. Processes can have several states. Input and output process modes are also introduced. Variables and streams for communication between processes can be terminated automatically. 3. KL1 Load Distribution Library The load distribution library is a set of utilities that implement typical load distribution schemes such as network generation, process mapping, and dynamic load distribution for KL1 programming. The library provides templates for typical distribution schemes. A parallel program for solving a given problem is created by linking this library with problem-specific code written by the user. 4. Multiple sequence alignment by Parallel Iterative Aligner This program solves multiple alignment problems that align similar parts of protein sequences. The parallel iterative aligner, a method of repeated partial improvement using dynamic programming, is implemented. Put concretely, possible partial improvements are computed on each processor in parallel, with the best result being selected. This is a sort of hill-climbing method, and the program is written in KL1. This program solves large scale multiple alignment problems with high quality on PIM. For example, aligning 20 sequences, each of which consists of 80 characters, requires about 10 minutes on PIM using 256 processors. The results of this program are of high enough confidence to allow the investigation of biological questions. Analyzing the behavior of this program will provide you with valuable knowledge on parallel information processing and parallel load distribution. 5. Intelligent Refiner for Multiple Sequence Alignment This program refines a protein sequence aligned by Multiple Sequence Alignment with biological knowledge. This program has an interface to obtain knowledge such as the plan a biologist uses for refining protein sequences. This program is provided with an automatic aligner with constraints, which acts on a part of the protein sequences designated by the user via the mouse interface. The automatic aligner with constraints aligns protein sequences while keeping part of the sequences fixed by the user. This program is written in C with Xlib and OSF/Motif, making it easy for users to handle. The alignment of part of a protein sequence is processed by a KL1 program, with the result of the alignment appearing on the display. This program has an alignment editor that can insert and delete gaps, as well as search for a motif, by means of the mouse interface. 6. Protein Structure Visualization System: Protein View This system displays the three-dimensional structure of a protein on a graphical workstation. The system consists of two parts, Pro-View and 3D-View. Pro-View reads protein data from the PDB (Protein Data Bank) and generates the graphical object description language 3D-Talk. 3D-View is a three-dimensional general purpose visualization tool that interprets 3D-Talk.
3D-View can visualize not only protein structures, but also robot structures and flight simulations. It also lets you animate these objects. 3D-Talk is an object-oriented language. It is easy for beginners to understand because its grammar is similar to that of English. Pro-View is a tool for analyzing protein structures. It features Pro-Talk, an extension of 3D-Talk. This system provides a powerful means of visualizing processes such as protein folding simulation. The ICOT Free Software Catalog-II has recently been published. Its contents explain these new programs that have been released as ICOT Free Software. Anyone wishing to obtain this catalog is invited to contact the IFS desk via e-mail, mail or fax at the address at the end of this newsletter. [User's Group] Anyone interested in organizing a user's group to carry out research on any particular item of ICOT Free Software, for the purposes of revision or improvement, is invited to contact the IFS-desk via e-mail at the address given at the end of this newsletter. We hope to feature some of your proposals in the next issue of the newsletter. Reactions to your proposals shall also be forwarded to you. Revised programs can be stored on the FTP server at ICOT, if you feel that your revision would be useful for other users. [About Common ESP] Some programs in ICOT Free Software can be executed under Common ESP (CESP). CESP is not part of the ICOT Free Software, instead being available from the AI Language Research Institute (AIR). To contact AIR with questions related to Common ESP, use the following address. Research Management Department AI Language Research Institute, Ltd. c/o Computer & Information Systems Laboratory Mitsubishi Electric Corporation 1-1, Ofuna 5-chome, Kamakura Kanagawa 247 Japan e-mail: cesp-request@air.co.jp FAX: +81-467-48-4847 [Contacts] For information on IFS, access ifs@icot.or.jp by e-mail. If you receive a paper edition of this newsletter, let us know your e-mail address and we shall send you the electronic edition. If you do not have an e-mail facility, contact the address below. All available IFS is listed in the "ICOT Free Software Catalog" and "ICOT Free Software Catalogue II". If you do not have a copy of either catalog, supply the IFS-desk with your postal address and we shall arrange to send a copy to you. If any of your colleagues or acquaintances are interested in IFS, let us know their name and both their e-mail and postal addresses, and we shall arrange to send them both the newsletter and catalog. ICOT Free Software desk Institute for New Generation Computer Technology 21st Floor, Mita Kokusai Bldg. 4-28, Mita 1-chome Minato-ku, Tokyo 108 Japan FAX: +81-3-3456-1618 -- Institute for New Generation Computer email:j-sato@icot.or.jp Technology (ICOT) 2nd lab/IR&PR-G tel:+81-3-3456-3192 Motoaki SATO fax:+81-3-3456-1618 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ychang@cs.tamu.edu (Yeimkuan Chang) Subject: Question on Mesh? Organization: Texas A&M Computer Science Department, College Station, TX Hello netters, I would like to do some research on mesh architectures. There is a large literature comparing the performance of different topologies. However, what I want to know is the performance difference between different types of meshes with the same number of nodes, e.g. between 8x8x4 and 16x16. I would appreciate any pointers or references concerning this.
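As one very rough, illustrative starting point (an editorial aside, not part of the original question): simple static metrics already separate these two 256-node examples. A 16x16 mesh has diameter (16-1)+(16-1) = 30, 480 links and a bisection width of 16, while an 8x8x4 mesh has diameter 7+7+3 = 17, 640 links and a bisection width of 32, assuming unit-width bidirectional channels and no wraparound. The short C fragment below computes these figures for any k1 x k2 x k3 mesh:

    #include <stdio.h>

    /* Static figures of merit for a k1 x k2 x k3 mesh (no wraparound),
     * assuming unit-width bidirectional channels; k = 1 drops a dimension. */
    static void mesh_metrics(int k1, int k2, int k3)
    {
        int k[3] = { k1, k2, k3 };
        int nodes = k1 * k2 * k3;
        int diameter = (k1 - 1) + (k2 - 1) + (k3 - 1);
        int links = 0, bisect = nodes, i;

        for (i = 0; i < 3; i++) {
            int others = nodes / k[i];         /* nodes in one cross-section  */
            links += (k[i] - 1) * others;      /* channels along dimension i  */
            if (k[i] % 2 == 0 && others < bisect)
                bisect = others;               /* cut that halves dimension i */
        }
        printf("%2dx%2dx%2d: %3d nodes, diameter %2d, links %3d, bisection %3d\n",
               k1, k2, k3, nodes, diameter, links, bisect);
    }

    int main(void)
    {
        mesh_metrics(16, 16, 1);   /* 2-D mesh, 256 nodes */
        mesh_metrics(8, 8, 4);     /* 3-D mesh, 256 nodes */
        return 0;
    }

Of course, dynamic behaviour (routing, contention, node degree and channel width under a fixed wiring budget) can shift the comparison, which is what the application-oriented studies you mention would address.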
General performance analysis or application oriented comparison are welcome. Thanks in advance Yeimkuan Chang -------------------------------------------------------------------- Department of Computer Science Office: 514A H.R.Bright Building Texas A&M University Phone: (409) 845-5007 College Station, TX 77843-3112 E-Mail: ychang@cs.tamu.edu -------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: "Carrie J. Brownhill" Subject: Fast MIMD machines - pointers wanted Newsgroups: comp.parallel Date: 19 Oct 93 18:59:25 GMT Greetings, I'm looking for information and pointers on shared memory MIMD machines with fast synchronization. I'm working on small granularity parallelization and want to find out how small a granularity is currently feasible on real machines. I'd also really appreciate any pointers to available time or simulators for fast shared memory MIMD machines. Thanks very much, Carrie J. Brownhill Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jgarcia@cse.ucsc.edu (Jorge Garcia) Subject: Info on PABLO wanted Date: 19 Oct 1993 23:56:44 GMT Organization: University of California, Santa Cruz (CE/CIS Boards) I'm looking for any information available on the PABLO system, from the University of Illinois (I think). Does anyone know where I can find technical articles about it, documentation, or any other source of information? Please reply directly to me: jgarcia@cse.ucsc.edu Thanks in advance, Jorge Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Greg.Wilson@cs.anu.edu.au (Greg Wilson (EXP 31 dec 93)) Subject: terminology question Organization: Australian National University I have always used the term "star" to refer to a topology in which every processor is connected to every other; however, I am told that the term is also used for topologies in which processors 1..N are connected to a distinguished central processor 0. Assuming that the latter definition is more common, is there a term for topologies of the former type? Thanks, Greg Wilson Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: N MacDonald Subject: Workshop on Cluster Computing Organization: Department of Computer Science, University of Edinburgh JISC/NTSC Workshop on Cluster Computing The University of Edinburgh Tuesday 2nd November 1993 Workstation and machine clusters are seen by many to be a major development in the evolution and application of parallel environments. In order to inform, and to discuss the issues and prospects for applications of clusters within Higher Education, the University of Edinburgh has organised a one-day workshop on behalf of the New Technologies Subcommittee of the Joint Information Systems Committee of the UK Higher Education Funding Councils. Papers from leading practitioners in the field from the UK, Europe and the USA, will present the state of the art in using clusters in a variety of different scenarios, and highlight the challenges and problems which arise. Significant opportunities for discussion and exchanges of views will be included. It is hoped that the major workstation vendors will be represented at the workshop. Attendance will be encouraged from the largest number of individual Higher Education Institutions. 
No registration fee will be charged, although it is essential that delegates register with the workshop organisers AS SOON AS POSSIBLE. Contact details are provided below (See "FURTHER INFORMATION"). *************************************************************************** ********** WORKSHOP VENUE ************************************************* *************************************************************************** The workshop will be held in the John McIntyre Centre at the University of Edinburgh's Pollock Halls of Residence. A map and directions will be sent out to you after registration. The workshop will commence at 0930 and close at 1630, in order to accommodate arrival in Edinburgh on the morning and departure in the early evening. A list of hotel and guest house accommodation is available from the workshop organisers (See "FURTHER INFORMATION"). Venue: John McIntyre Centre Pollock Halls of Residence The University of Edinburgh St Leonard's Hall 18 Holyrood Park Road Edinburgh EH16 5AY *************************************************************************** ********** PROVISIONAL PROGRAMME ****************************************** *************************************************************************** 0830 ARRIVAL AND REGISTRATION 0930 WELCOME Dr R. D. Kenway, Director, Edinburgh Parallel Computing Centre, The University of Edinburgh, UK 0935 INTRODUCTION Dr A. E. Clementson, Chairman, JISC New Technologies Subcommittee 0945 SUPPORTING DISTRIBUTED COMPUTING IN A SCIENTIFIC AND ENGINEERING ENVIRONMENT Professor R. J. Hynds, Professor of Computing Management, Imperial College of Science and Technology, UK In 1991 the Centre for Computing Services at Imperial College started to distribute centrally funded computing resources to the College's science and engineering departments. By the end of 1992 all of these departments had clusters of client-server workstations (at least 10 workstations and a file server for the small departments, proportionally more for the larger departments) linked by a fibre optic, FDDI, campus backbone. The talk will be concerned with the ways in which departments have utilised these systems in their teaching and research, and the technical management problems that have arisen. Consideration will be given to the managerial and technical problems of implementing a software package capable of running a workstation cluster in 'parallel computing mode'. R.J. Hynds is a physicist, who spent 20 years in space research activities at Imperial College and still has a research interest in the Ulysses space craft project. He was appointed Head of the Centre in 1980, and assumed full time responsibility in 1983. He was made Professor of Computing Management in 1988. His interests are in the problems of managing change in computer service environments. 1015 Discussion 1030 EXPLOITING NETWORKED COMPUTERS FOR HIGH PERFORMANCE COMPUTING N. B. MacDonald, Edinburgh Parallel Computing Centre, The University of Edinburgh, UK Clustered or networked computer systems can be exploited for high performance computing in a variety of ways. This talk will present an overview of the spectrum of possibilities, from dedicated homogeneous batch servers executing a set of sequential tasks, to heterogeneous environments supporting a mixture of both sequential and parallel interactive and batch tasks. Particular consideration will be given to the issues which arise for the user and the system manager in each scenario. 
Neil MacDonald studied Computer Science and Artificial Intelligence at the University of Edinburgh. Since 1990 he has been a technical consultant on the staff of Edinburgh Parallel Computing Centre. In this role he has been involved in research, training and consultancy on the use of parallel computing platforms in a range of applications drawn from both academia and industry. His research interests focus on evaluating, modelling and predicting the performance of parallel systems, and he is currently preparing a PhD thesis on this topic. 1100 Discussion 1115 COFFEE --------------------------------------------------------------- 1145 BATCH AND PARALLEL PROCESSING ON WORKSTATION CLUSTERS Prof. Dr. W. Gentzsch, Genias Centre for Numerically Intensive Applications, Germany The replacement of mainframes by workstations and workstation clusters is nowadays discussed as a hot topic in research and industry. This directly leads to the use of these clusters as batch environments which is primarily enabled by new software, such as e.g. CODINE (COmputing in DIstributed Memory Environments). A second approach to a more efficient use and load balancing of workstations is to cluster them into a parallel environment. The software tools EXPRESS and PVM provide a professional parallel platform for workstation clusters, and FORGE 90 allows semi-automatic and interactive parallelizations of existing computer programs. In this contribution, the software tools CODINE, EXPRESS, PVM and FORGE 90 will be presented and their use for batch and parallel workstation clusters will be discussed with the aid of several examples. Wolfgang Gentzsch studied mathematics and physics at TU Aachen, writing a doctoral thesis in ``Numerics in Nonlinear Elasticity''. After research at TU Darmstadt, the Max-Planck Institute Garching and the German Aerospace and Aeronautics Establishment (DLR), he was appointed head of the computational fluid dynamics department at DLR. Since 1985 he has been professor of applied mathematics and computer science at FH Regensburg. He founded Center for Numerically Intensive Applications and Supercomputing GENIAS Software GmbH in 1990, and GENIAS Parallel Computing GmbH in 1992 as European distribution, training and support centers for software tools and services for parallel computers and workstation clusters. 1215 Discussion 1230 LUNCH ---------------------------------------------------------------- 1330 GLOBAL SCIENTIFIC COMPUTING VIA A FLOCK OF CONDORS Dr M. Livny, University of Wisconsin-Madison, USA In recent years we have experienced a dramatic increase in the processing capacity owned by the scientific community. There is a powerful workstation on the desk of almost every experimental scientist, farms of Unix boxes are assigned to the exclusive usage of small groups of researchers, departments own multiprocessors with tens or even hundreds of processing nodes, and many institutes have acquired supercomputers. Due to fluctuations in the processing demands of the owners of these resources, most of them are underutilized. At the same time, however, many researchers experience very long waiting times for their scientific computations. The scientific community suffers from a "wait-while-idle" problem that leads to a huge gap between the computing capacity that it owns and the actual capacity experienced by an individual scientist. Too often does a scientific computation wait in a queue while a computing resource that is capable of serving it is idle. 
Since 1984 we have been engaged in an effort to develop a system that will route scientific computations to unutilized computing resources in a cluster of privately owned Unix workstations. We view the Condor system that was built as a result of this effort as a first step in the direction of solving the wait-while-idle problem for scientific computations. For more than four years Condor has been operational in our department. It currently controls a cluster of more than 250 workstations and is used by more than 30 researchers. In the last two years Condor has also been installed in a wide range of scientific environments. The existence of a worldwide flock of Condors has motivated us to move to a new phase in our research and to address the wait-while-idle problem at an inter-cluster level. Assuming that each cluster of workstations is controlled by a Condor system, we are currently in the process of developing policies and mechanisms to route jobs across Condor systems. The talk will start with an overview of why and how we have addressed the problem of scheduling scientific computations in an environment of privately owned resources. A description of the Condor system and a summary of the experience we have gained from using it in academic and industrial settings will follow. The talk will conclude with an outline of our current and future research activities that address the wait-while-idle problem from the perspective of very large clusters of heterogeneous computing resources. Miron Livny received the B.S. degree in Physics and Mathematics in 1975 from the Hebrew University and the M.Sc. and Ph.D. degrees in Computer Science from the Weizmann Institute of Science in 1978 and 1984, respectively. Since 1983 he has been on the Computer Sciences Department faculty at the University of Wisconsin-Madison, where he is currently an Associate Professor. Dr. Livny's research focuses on scheduling policies for processing and data management systems and on tools that can be used to evaluate such policies. His recent work includes Real-Time DBMSs, Client Server systems, batch processing, and tools for experiment management. 1345 Discussion 1415 HIGH-PERFORMANCE CLUSTERS IN A SUPERCOMPUTING ENVIRONMENT Dr R. L. Pennington, Pittsburgh Supercomputing Center, USA The Pittsburgh Supercomputing Center has been exploring and exploiting the technologies associated with high-performance workstation cluster computing and has created the "SuperCluster", a workstation cluster that has become an integral part of the PSC supercomputing environment. The goal of this project was to develop a cluster that is a manageable, parallel, scalable, secure and incrementally allocatable supercomputing class resource that is capable of supporting serial and parallel development and production work. This was achieved by integrating and extending emerging software technologies in the fields of parallel software systems and tools, distributed file systems, and resource management. Scientifically significant application programs are now available on the SuperCluster and it is also serving as a development and testing environment for work in heterogeneous supercomputing at the PSC. Robert L. Pennington received his Ph.D. in Astronomy from Rice University in Houston, Texas, in 1984 with research based on digital image processing techniques.
At the University of Minnesota, he led the development team that created a computerized, modern high-speed scanning microdensitometry system for astronomical research that used a cluster of Sun and SGI workstations to perform the data reduction for the real-time data collection system. Currently, he is the project leader for the SuperCluster heterogeneous workstation cluster at the Pittsburgh Supercomputing Center, and the Workstation Cluster Software Group Leader for the Metacenter, a collaboration of the NSF supercomputing centers. 1445 Discussion 1500 COFFEE --------------------------------------------------------------- 1530 PANEL DISCUSSION Chaired by Professor A. J. G. Hey, University of Southampton 1625 CLOSING REMARKS Dr A. E. Clementson, Chairman, JISC New Technologies Subcommittee 1630 CLOSE *************************************************************************** ********** FURTHER INFORMATION AND REGISTRATION *************************** *************************************************************************** Requests for further information and registration should be sent to: JISC Cluster Workshop Edinburgh Parallel Computing Centre The University of Edinburgh James Clerk Maxwell Building Mayfield Road Edinburgh EH9 3JZ Telephone: 031 650 5030 Fax: 031 650 6555 Email: jisc-workshop@epcc.ed.ac.uk *************************************************************************** ********** REGISTRATION FORM ********************************************* *************************************************************************** Name:______________________________________________________________________ Affiliation:_______________________________________________________________ Position:__________________________________________________________________ Postal address: ___________________________________________________________ ___________________________________________________________ ___________________________________________________________ ___________________________________________________________ ___________________________________________________________ Postcode: ______________________ Telephone: ______________________ Fax: ______________________ Email address: ______________________ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Ken Thomas Subject: Comett Course Date: 20 Oct 1993 11:35:58 +0100 Organization: Electronics and Computer Science, University of Southampton Appended are details of Comett course to be given at Soultz, France Nov 2-5, 1993 -- Dr Ken Thomas Department of Electronics and Computer Science University of Southampton Southampton S09 5NH United Kingdom Telephone : 0703-592170 Fax : 0703-593045 Email : kst@uk.ac.soton.ecs Applications for High Performance Computers Soultz, France Date: November 2nd, 3rd, 4th and 5th, 1993 Nov 2nd-5th, 1993 The aim of this course is to understand some aspects of current applications of high performance computers. There are three main objectives: 1. To give an overview of parallel hardware and software and to explore the role of performance critical parameters. Matrix kernels are also explored. 2. To give awareness of the tools that are likely to be important in the future. This includes HPF (High performance Fortran) and the message passing standards. 3. To put together applications in diverse areas of science and engineering. There are speakers on seismic modelling, CFD, Structural Analysis, Molecular dynamics and climate modelling. Programme. 
Day 1 14.00 Start Introduction and Welcome Session 1 Overview Introduction to Parallel Hardware Introduction to Parallel Software Panel Discussion Day 2 Start 09.30 Session 2 Performance Characterization Low-level Benchmarks and Performance Critical Parameters CFD Session 3 Applications I Seismic Modelling Climate Modelling Panel Discussion Day 3 Start 09.30 Session 4 HPC Standards HPF Message-Passing Interface Session 5 Parallel Matrix Kernels Structural Analysis Panel Discussion Day 4 Start 09.00 Session 6 The Parkbench Initiative Grand Challenge Applications Panel Discussion. Close 12.15 Cost 375 pounds sterling (Full Rate), 275 pounds sterling for academic participants and members of ACT. Costs include lunch and refreshments throughout the day. Minimum numbers 10 This course cannot be given unless there is a minimum of 10 participants. It will be necessary to receive your registration no later than Monday 25th October, 1993. Should the course not run, then all registration fees will be returned. Applications for High Performance Computers Soultz, France Date: November 2nd, 3rd, 4th and 5th, 1993 Applications for High Performance Computing Registration Form Title . . . . . . . . . . . . . . . . . Surname . . . . . . . . . . . . . . . . First Name . . . . . . . . . . . . . . . Institution . . . . . . . . . . . . . . . . . . . . . . . . . . . Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tel: . . . . . . . . . . . . . . . . . Fax: . . . . . . . . . . . . . . . . . I enclose a cheque in the sum of . . . . . . . . . . . . . . . . . . made payable to "University of Southampton". Please forward cheque and registration to Telmat Informatique. Venue: Telmat Informatique Z.1. - 6 Rue de l'industrie, B P 12 68360 Soultz Cedex France Local Accommodation Arrangements contact: Rene Pathenay/Francoise Scheirrer Telmat Informatique Tel: 33 89 765110 Fax: 33 89 742734 Email: pathenay@telmat.fr Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: Help: Hypercubes in Fat Trees From: Shigeru Ishimoto Dear Grouper, I am looking for papers on embedding hypercubes in fat trees. Could anyone give me information. Thanks, ----- _____ | A I S T Shigeru Ishimoto (ishimoto@jaist.ac.jp) | HOKURIKU 18-1 Asahidai Tatsunokuchichou Nomigun Ishikawaken Japan o_/ 1 9 9 0 Japan Advanced Institute of Science and Technology, Hokuriku Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cfreese@super.org (Craig F. Reese) Subject: Re: Help---references to CORDIC algorithm Organization: Supercomputing Research Center (Bowie, MD) References: <1993Oct14.171601.3726@hubcap.clemson.edu> In article <1993Oct14.171601.3726@hubcap.clemson.edu> Shigeru Ishimoto writes: >Dear Grouper, > >I am looking for the paper on the CORDIC algorithm which was discovered in the >period 1960-1970. The algorithm was discovered again by Dr. Richard Feynman >in the 1980's. >Could anyone give me information. > Yes. One of my favorites is: Walther, J. S. "A unified algorithm for elementary functions" Spring Joint Computer Conference 1971 You might also look for: Volder, J. E. "The CORDIC trigonometric computing technique" IRE Transactions on Electronic Computers Vol EC-8 No 3 pp 330-334 September 1959.
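For readers who have not met it, the core of CORDIC is a shift-and-add rotation loop driven by a small table of arctangents. The floating-point C sketch below is illustrative only (it is not taken from either paper, and a hardware implementation would use fixed-point arithmetic with precomputed tables rather than atan() and pow()); it rotates the vector (K, 0) through an angle theta in [-pi/2, pi/2] so that it converges to (cos theta, sin theta):

    #include <stdio.h>
    #include <math.h>

    #define ITER 32   /* number of CORDIC iterations */

    /* Rotation-mode CORDIC: on return *c ~ cos(theta), *s ~ sin(theta),
     * for theta in [-pi/2, pi/2].                                      */
    static void cordic_sincos(double theta, double *s, double *c)
    {
        double x, y, z = theta, K = 1.0;
        int i;

        for (i = 0; i < ITER; i++)               /* compensate rotation gain */
            K *= 1.0 / sqrt(1.0 + pow(2.0, -2.0 * i));

        x = K;  y = 0.0;                         /* start on the x axis      */
        for (i = 0; i < ITER; i++) {
            double d  = (z >= 0.0) ? 1.0 : -1.0; /* rotate toward z = 0      */
            double xn = x - d * y * pow(2.0, -i);    /* shift-and-add step   */
            double yn = y + d * x * pow(2.0, -i);
            z -= d * atan(pow(2.0, -i));             /* table value atan(2^-i) */
            x = xn;  y = yn;
        }
        *c = x;  *s = y;
    }

    int main(void)
    {
        double s, c;
        cordic_sincos(0.5, &s, &c);
        printf("cordic: sin(0.5)=%.9f cos(0.5)=%.9f\n", s, c);
        printf("libm:   sin(0.5)=%.9f cos(0.5)=%.9f\n", sin(0.5), cos(0.5));
        return 0;
    }

The same iteration run in "vectoring" mode yields vector magnitudes and arctangents, which is one reason CORDIC turns up so often in DSP and VLSI work.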
CORDIC stuff tends to pop up often in VLSI/DSP conferences. Sorry but I don't have any references handy. Craig *** The opinions expressed are my own and do not necessarily reflect *** those of any other land dwelling mammals.... "The problem ain't what we don't know; it's what we know that just ain't so Either we take familiar things so much for granted that we never think about how they originated, or we "know" too much about them to investigate closely." ----------------- Craig F. Reese Email: cfreese@super.org Supercomputing Research Center 17100 Science Dr. Bowie, MD 20715-4300 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Douglas Hanley Subject: Paralation Model of Parallel Computation Keywords: Paralation C, MIMD Sender: UseNet News Admin Organization: Department of Computer Science, University of Edinburgh Apparently-To: comp-parallel@uknet.ac.uk I was wondering if anybody has had any experience of Gary Sabot's Paralation Model of Parallel Computation. I am currently attempting to implement the models three operators (Match, Move and Elwise) and Field data structure across a MIMD architecture so as to investigate its performance. I am going to use C as a base language for the model. I know that in 1988 Sabot was experimenting with Paralation C. What I need to know is this; was this language ever completed, if so upon what architectures (SIMD/MIMD), if so where can I get information upon it. Absolutely any information regarding peoples experience of the Paralation Model (its usage and/or implementation) would also be useful. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: schooler@apollo.hp.com (Richard Schooler) Subject: Cray T3D cache coherence Sender: usenet@apollo.hp.com (Usenet News) (I'm not sure this went out the first time...) I've been reading the Cray T3D Architecture Overview, which describes the machine as a physically distributed, logically shared memory machine (NUMA). There's no mention of cache coherence, though. As far as I can tell, the software must handle the caches by using a memory barrier after writes, and non-cached reads. Is this correct? -- Richard Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel To: comp-parallel@uunet.UU.NET Path: emf From: emf@freedom.NMSU.Edu (Xyanthilous Harrierstick) Newsgroups: comp.parallel Subject: Intel iPSC/d5 Hypercube info requested... Date: 20 Oct 1993 15:38:02 GMT Organization: Student Computing Facility, New Mexico State Univ., Las Cruces Nntp-Posting-Host: freedom.nmsu.edu Hi. Our student computing group here at NMSU has been donated an Intel iPSC Hypercube (actually two of the big silver boxes. 64 processors altogether) driven by an Intel 286 based System 310. the hypercube itself is of the d5 configuration. (lowest on the list, according to the manual) We would like to use this machine to do some ray tracing and maybe some cellular automata experiments. How does the performance of one of these systems compare with doing the same task on a Sun4/670 ? I'm curious to know if it's really worth the effort to get this system functional and then recode all our software . (I know next to nothing about parallel processed machines) thanks for your time. Erik -- Erik "Xyanthilous" Fichtner efichtne@dante.nmsu.edu Physics and Astronomy emf@freedom.nmsu.edu, or techs@wyvern.ankle.com "Whattya mean I ain't kind? Just not _YOUR_ kind!" 
- Megadeth Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: elammari@scs.carleton.ca (M. Elammari) Subject: Finding smallest value on a hypercube Organization: Carleton University I posted this article few days ago, but it didn't show up on our system. So here is another try. ------------------------- I am lookin for parallel algorithms for finding the smallest value on Hypercubes. Any help/references will be appreciated. Thank you in advance. -- M. Elammari Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Newsgroups: soc.college.gradinfo,comp.edu,comp.ai,comp.ai.genetic,comp.ai.neural-nets,comp.software-eng,comp.parallel From: rro@CS.ColoState.EDU (Rod Oldehoeft) Subject: Ph.D. fellowships in CS (esp. AI, PP, SE) Organization: Colorado State University PH.D. FELLOWSHIPS AVAILABLE SPRING, 1994 SEMESTER COMPUTER SCIENCE DEPARTMENT COLORADO STATE UNIVERSITY Fellowship support for graduate study leading to the Ph.D. in Computer Science is available at Colorado State University. These fellowships were recently awarded by the U.S. Department of Education through the Patricia Roberts Harris Fellowship Program, AND MUST BE FILLED FOR THE JANUARY, 1994 SEMESTER! The CS Department at Colorado State has enjoyed significant growth in artificial intelligence, parallel and distributed computing, and software engineering. Research funding has shown steady increases for several years, and proposals for research infrastructure are pending. A description of the research faculty, programs, facilities, and current research projects is attached below. Description of the fellowships: 1. The stipend is the same as that for NSF Fellowships ($14,000 in the first fiscal year). A demonstrated need is required to receive the full amount. Additional funding supports tuition, fees, as well as student projects. 2. The duration of the fellowship is up to five years; in one of the years each winning fellow will engage in teaching duties to satisfy Departmental degree requirements. Qualifications for the award: 1. You must be admitted to the Ph.D. program in Computer Science at Colorado State University by the beginning of the Spring, 1994 semester. This means, at a minimum, that you have an earned baccalaureate degree in an appropriate discipline by the beginning of the semester, and you satisfy other requirements for admission. 2. Spring, 1994 must be your first or second semester of graduate work. 3. Because of the prestigious nature of these fellowships, you should be an individual whose background makes you competitive for other national fellowships (NSF or NPSC, for example). 4. You must be a woman or a member of a minority group, or both. 5. You must be a citizen or permanent resident of the United States. TIMING IS EVERYTHING: These awards must be filled for the Spring, 1994 semester. If you are qualified for an award, and find Colorado State an attractive possibility for your graduate study, please ask for application materials from gradinfo@cs.colostate.edu. Please mention this posting. Research Programs, 1993--94 Computer Science Department Colorado State University FACULTY Charles Anderson, Ph.D., University of Massachusetts; Assistant professor; Neural networks for control and signal processing, reinforcement learning, pattern classification, artificial intelligence, graphics. J. 
Ross Beveridge, Ph.D., University of Massachusetts; Assistant professor; Computer vision, robot navigation, model matching, visual feature extraction, software environments. James Bieman, Ph.D, University of Southwestern Louisiana; Associate professor; Software engineering, automated testing and analysis, metrics and reuse, programming languages. A.P. Wim Bohm, Ph.D., University of Utrecht; Associate professor; Declarative programming languages, algorithms design for declarative programming languages, multithreaded architectrues. Karl Durre, Ph.D., Technical University of Hannover; Associate professor; Algorithms and data structures, human-computer interaction, interfaces for the blind. Michael E. Goss, Ph.D., University of Texas at Dallas; Assistant professor; Graphics, terrain visualization and geographic applications, volume visualization, visual simulation, fractals, digital signal processing. Dale Grit, Ph.D., University of Minnesota; Associate professor; Parallel functional languages and architectures, operating systems. Adele Howe, Ph.D., University of Massachusetts; Assistant professor; Artificial intelligence, evaluating AI systems, planning, agent arch- itectures, failure recovery. Robert Kelman, Ph.D., University of California at Berkeley; Professor, Editor for Computers and Mathematics with Applications and Rocky Mountain Journal of Mathematics; Computational methods, mathematical software. Yashwant Malaiya, Ph.D., Utah State University; Professor, Fault-tolerant computing, software reliability management, fault modeling, testing, hardware/software reliability evaluation, testable design. Walid Najjar, Ph.D., University of Southern California; Assistant professor; Computer architecture, Parallel processing and architectures, perform- ance and reliability evaluation, parallel simulation. Rodney Oldehoeft, Ph.D., Purdue University; Professor and Chairman; Parallel processing software and systems, functional programming, operating systems. Kurt Olender, Ph.D., University of Colorado; Assistant professor; Software engineering, development environments, software analysis and evaluation tools, programming languages. Pradip Srimani, Ph.D., University of Calcutta; Professor; Parallel and distributed computing, operating systems, graph theory applications. Anneliese von Mayrhauser, Ph.D., Duke University; Associate professor; Software engineering, maintenance, metrics, testing, reliability, performance evaluation. Alan Wendt, Ph.D., University of Arizona; Assistant professor; Languages and compilers, automated code generators and optimizers. Darrell Whitley, Ph.D., Southern Illinois University; Associate professor; Artificial intelligence, machine learning, genetic algorithms, neural networks. Anura Jayasumana, Ph.D., Michigan State University; Affiliate faculty; Networks, VLSI. Julian Kateley, Ph.D., Michigan State University; Affiliate faculty; Computer systems evaluation, computer center management. Frederick Kitson, Ph.D., University of Colorado; Affiliate faculty; Graphics algorithms and architectures, scientific visualization, parallel algorithms for geometric modeling. Michael Molloy, Ph.D., University of California, Los Angeles; Affiliate faculty; Analytic modeling, stochastic petri nets, networks. Jack Walicki, Ph.D., Marquette University; Affiliate faculty; Architectures and algorithms for parallel processing and signal processing, microprogramming. 
DEGREE PROGRAMS AND AREAS OF STUDY The Computer Science Department at Colorado State University offers programs of study leading to Bachelor of Science, Master of Science, and Doctor of Philosophy degrees in Computer Science. Possible areas of emphasis for graduate studies include: Algorithms Applicative Languages Architecture Artificial Intelligence Computational Methods Computer Vision Distributed Systems Fault-Tolerant Computing Genetic Algorithms Graphics Languages and Compilers Neural Networks Operating Systems Parallel Processing Performance Evaluation Software Engineering COMPUTING FACILITIES The Computer Science Department maintains several distinct laboratories: Architecture studies: an HP9000/S400 server and five 400t stations; Artificial intelligence/neural nets: five IBM RS6000/320H stations and five X terminals; Graphics: an HP9000/720 workstation, a 433VRX system, two 375SRX's and five 340 workstations; Networks and distributed computing: an ATT 80386 server and 18 80386SX stations; Software engineering: three HP9000/710 workstations, an S400 server and seventeen M400 workstations. An HP9000/735 and numerous HP, Sun, and DEC servers and workstations provide general purpose computing. The Department houses a Motorola Monsoon Data-Flow machine, and a 16-cpu Sequent Balance 21000 multi- processor system for parallel processing research. All the preceding run Unix and are fully networked. Researchers have access to other multipro- cessor systems at remote locations. In addition, the Department has a vari- ety of X terminals and microcomputers for instructional and research purposes. Several microprogrammable systems from AMD and TI are used for instructional purposes. Laser printers are available. Academic Computing and Networking Services maintains several IBM RS6000 servers running AIX, and a Computer Visualization Laboratory. The campus backbone network connects many sites and provides access to the Internet. CURRENT RESEARCH PROJECTS Learning Algorithms for Neural Networks: Neural networks are a highly parallel mode of processing with very simple computing elements that perform a ``sub-symbolic'' form of computation and representation. These nets learn a set of weighted connections among nodes that map an input pattern onto an output pattern. In this project, we are investigating ways of improving the learning efficiency of neural networks. Our approach involves novel architectures and methods for learning new internal representations. Investigators: Charles Anderson and Darrell Whitley. Neural Networks for Control: Current design techniques for automatic controllers require knowledge about the system to be controlled. Often this knowledge is not available. We investigate using neural networks to improve the performance of controllers by learning from on-line experience. We focus on increasing the efficiency of reinforcement learning algorithms. Investigator: Charles Anderson. Sponsors: National Science Foundation; American Gas Association; CSU. EEG Recognition with Neural Networks: We are studying the feasibility of human-computer interaction through the use of on-line recognition of electroencephalogram (EEG) patterns. Our approach is to develop new algorithms for developing internal representations of EEG signals. Investigator: Charles Anderson. Sponsor: National Science Foundation. Computer Vision: Recognition of objects by shape is an essential problem within the field of computer vision. 
This project will extend a class of probabilistic algorithms for determining whether an object is visible in a scene, and if so, where precisely the object is relative to the camera. This project will also investigate parallel implementations and formal ways of characterizing algorithm performance. Potential application domains include robot navigation, photo-interpretation and automated manufacturing. Investigator: J. Ross Beveridge.

Automated Software Testing: One mechanism for finding faults in software is through the use of executable assertions. Executable assertions are rarely used in practice, due, in part, to the actual and perceived overhead of defining and embedding the assertions in code. One way to expand the use of executable assertions is with tools that help developers manage assertion definition and placement. We evaluate possible alternatives for such tools in the context of the design of our own assertion tool, C-Patrol. C-Patrol allows a developer to reference a set of previously defined assertions, written in virtual C, bind assertion parameters, and direct the placement of the assertions by a pre-processor. We are now evaluating the use of C-Patrol for executable specifications and oracles, debugging, and test script generation. Investigator: James Bieman. Sponsors: Colorado Advanced Software Institute. Collaborators: Storage Technology Corp., Micro-Motion, Inc.

Reuse Metrics for Object Oriented Software: We are identifying a set of measurable reuse attributes appropriate to object oriented and object based systems, deriving a suite of metrics which quantify these attributes, and designing prototype tools to take these measurements. Our prototype tools can evaluate reuse in both Ada and C++ software. We are collecting software data for empirical studies of software reuse; we are also investigating the relationship between reuse and structural properties of software including coupling and cohesion; we are exploring techniques for graphical visualization of software reuse. Investigator: James Bieman. Sponsors: NASA Langley Research Center.

Measurement and Analysis of Software Functional Cohesion: We are developing measures of functional cohesion following the approach used in the physical sciences to develop measurements. We define abstract models of functional cohesion and use these models to identify cohesion attributes and define measures of these attributes. We are examining the orderings that these measures impart on software modules. We plan to develop cohesion metric tools and conduct empirical studies of cohesion in real world software. Investigator: James Bieman.

Automatic Analysis from Algebraic Specifications: We seek methods for using algebraic specifications to automatically analyze software implementations. One approach is to convert algebraic axioms into sequencing specifications, which are used to statically analyze programs using the Cecil/Cesar system to determine whether specifications are satisfied by the implementation. Investigators: James Bieman and Kurt Olender.

Applicative Language Features: We investigate the requirements of scientific programming versus the features offered by current applicative programming languages, and compare these to conventional programming languages. A first comparative language study involves Fortran, SISAL and Id. We focus on the implementation of numerical algorithms and study language features, their expressiveness and efficiency. Investigator: Wim Bohm. Sponsor: Los Alamos National Laboratories.
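As a purely illustrative aside to the Automated Software Testing project above: the sketch below shows, in plain C, the kind of run-time check an executable assertion expands to. It is a generic example only; the macro and function names are invented here and it does not show C-Patrol's actual syntax or capabilities.

    #include <stdio.h>
    #include <stdlib.h>

    /* Generic executable assertion: report and abort when a run-time
       condition is violated.  (Hypothetical example, not C-Patrol.) */
    #define EXEC_ASSERT(cond) \
        do { \
            if (!(cond)) { \
                fprintf(stderr, "assertion failed: %s (%s:%d)\n", \
                        #cond, __FILE__, __LINE__); \
                abort(); \
            } \
        } while (0)

    /* Example use: the assertion states a range constraint on the argument. */
    double scale(double x)
    {
        EXEC_ASSERT(x >= 0.0 && x <= 1.0);
        return 100.0 * x;
    }

    int main(void)
    {
        printf("%f\n", scale(0.25));   /* passes the assertion */
        printf("%f\n", scale(2.0));    /* violates it and aborts */
        return 0;
    }

In a tool-supported setting, a pre-processor would expand separately maintained assertion specifications into checks of this form, rather than having developers write them inline by hand.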
Compilation of Functional Languages for Hybrid von Neumann-Dataflow Machines: The research here is twofold. On the higher level we apply data dependence analysis techniques to determine uniqueness, completeness, and order of evaluation of array definitions. On the lower level we study mappings from IF2, a block-structured data dependence form with explicit memory management, to machine independent dataflow code. We map this code on paper as weel as real multithreaded machines. Investigators: Wim Bohm and Walid Najjar. Sponsor: National Science Foundation. VISA: A Virtual Shared Addressing System for Distributed Memory Multiprocessors: We study a run-time system for distributed memory multiprocessors, that provides a single address space and latency tolerance, and is machine and language independent. Data distribution is supported by mapping functions, that attempt to provide latency avoidance by tying loop threads to the array segments they work on. We target VISA to be the backend for the standard Sisal compiler. Moreover, we use VISA as a stand alone run- time system that provides a shared memory abstraction for distributed C programs. Investigators: Wim Bohm and Rod Oldehoeft. Sponsor: Sandia National Laboratories. Numerical Algorithms in Id: We study the parallel complexity of numerical library software, written in Id, a dataflow language designed at MIT, and executing on the Motorola/MIT Monsoon machines and, in a later stage of the research, on the Motorola/MIT *T machines. Investigators: Wim Bohm and Bob Hiromoto (LANL). Compression of Static Dictionaries: We are developing new methods for compactly storing the keywords of static dictionaries where changing the contents by deletion and insertion does not occur. The storage representation is based on a compressed trie structure featuring constant access time, easy cross-referencing and less than half the storage need of hash methods. Investigator: Karl Durre. Interactive Computer Access for the Blind: We have developed innovative interaction techniques for non-visual computer access. Based on this research we will design and implement techniques for tactual and auditory access to textual screen information, line graphics and geographical maps. Applications include text editing, electronic drawing, and an electronic atlas. Investigator: Karl Durre. Computer Aided Map Reading for Blind People: Most maps for sighted people contain far too much information for tactual encoding and access by blind persons. We are investigating interactive methods for selective tactual display of geographic information. Investigator: Karl Durre. Computer-mediated Communication between Blind/Deaf-Blind and Sighted Persons: Methods to use computers for direct communication between blind and sighted people are being investigated. In cooperation with the Colorado Department of Education and the Special Education Department at the University of Northern Colorado, a system that helps in the communication between blind students and sighted classroom teachers is being tested in some schools. Investigator: Karl Durre. Detail Enhancement in Volume Images: This project investigates methods for enhancing detail in 3D images rendered from medical imaging volume data such as Magnetic Resonance (MR), Computed Tomography (CT), and others. Digital signal processing techniques have been applied to MR data to enhance the display of fine detail in the generated images by improving the calculation of surface normal vectors used for shading. 
Research continues into improvements to the shading calculations and into the application of these techniques to other types of data in addition to MR. Investigator: Michael Goss. Visualization of Forest Growth and Succession: We are investigating rendering techniques for visualizing the results of forest growth and succession models using realistic images. Terrain elevation models, texture maps derived from aerial photographs, and synthetic texture maps generated by models based on real images will be used to generate still images and animations. Techniques such as fractal terrain inter- polation will be used to generate levels of detail in the results beyond that available from existing databases. Investigators: Michael Goss and Denis Dean (Forest Sciences). Distributed SISAL: We are extending a shared memory implementation of SISAL to execute on distributed, message-passing architectures. A primary issue is the automatic mapping of processes across the set of processors to distribute the workload while holding down communication costs. Other issues to examine include minimizing communication costs for arrays, distributing program code, collecting garbage cells, and examining variants of the basic evaluation strategy. Investigator: Dale Grit. Experimental Methods for Evaluating Planning Systems: AI planners are difficult to evaluate because they involve many components and are embedded in complex environments. We are developing tools to facilitate experiments to answer the following questions: How well does a component of a planning system work, and how will changing the task, environment or structure of the component affect its performance? Investigator: Adele Howe. Sponsor: Advanced Research projects Agency Developing Reliable plan Knowledge Bases: Knowledge based planning systems tend to be brittle when applied in real or realistic environments. To address this brittleness, we are developing a methodology and tools for analyzing why planning systems fail and determining how those failures can be prevented or repaired at run-time. Investigator: Adele Howe. Sponsor: National Science Foundation Dual Fourier Analysis: Algorithms and analysis of applications are being investigated for a broad class of dual and multiple Fourier series. Studies include numerical procedures, algorithmic implementation of closed form solutions, domains of applicability, existence of solutions, and applications to heat transfer, fracture mechanics, cryptology, and communication theory. Investigator: Robert Kelman. Reliability Management through Self-Testing: We explore methods of achieving ultra-high reliability in systems. Over long durations simple redundancy ceases to be useful and can actually decrease reliability. This investigation focuses on testing as a means for assuring a very high degree of readiness, in five areas. Readiness: To achieve a high assurance of health, a system must self-test periodically. The concept of readiness includes traditional and information- theoretic aspects of reliability. Techniques to evaluate readiness are under study. Design faults: As software testing progresses, faults are discovered and corrected, and several reliability models attempt to describe this process. We compare the effectiveness of these models, and investigate a unified model. Register- level self-testing: Increasing complexity requires that testability be examined at the register-sandwich level. We pursue measures of testability and fault coverage for use in optimal design. 
High performance testable architectures: High performance need not be incompatible with testability. We are investigating a highly testable microarchitecture that eliminates the partition between data and control, that is configurable as a RISC or CISC design, and that shows promise of high performance. Modeling faults caused by aging: We are trying to develop techniques to handle faults from aging of VLSI devices. Investigator: Yashwant Malaiya. Sponsor: Strategic Defense Initiative/Office of Naval Research. Software Reliability Management: We are developing techniques for achiev- ing software reliability within given constraints. The approaches include achieving accuracy for reliability growth models, development of static models for estimating defect density and reliability growth, evaluation of test effectiveness, software testability, relationship of test coverage based methods. Investigator: Yashwant Malaiya Sponsor: Strategic Defense Initiative/Office of Naval Research. Reliability, Performability and Scalability of Large-Scale Distributed Systems: This project will investigate the reliability, scalability and performability of large-scale distributed systems. As the number of system elements increases, the rate of failure of the system is expected to increase. The research focuses on two issues: the analysis of network reliability and performability, and the evaluation of techniques that can exploit redundancy in large-scale systems. The resulting higher reliability comes at the cost of reduced computing power. We will also investigate achievable performance/reliability tradeoffs and system scalability using various redundancy schemes. Investigator: Walid Najjar. Sponsor: National Science Foundation. Evaluation of Adaptive Routing Strategies in Interconnection Networks: The objectives of a routing algorithm are to provide (1) minimal latency, (2) freedom from deadlock and livelock and (3) tolerance to fault. In this project we evaluate the cost performance tradeoffs of various adaptive routing algorithms, both minimal and non-minimal in path length. The evaluation is based on statistical results derived experimentally from simulation as well as analytical results derived from queuing models of interconnection networks. Investigators: Walid Najjar and Pradip Srimani. The Architectures of Hybrid von Neumann-Dataflow Processors: In this project we adopt a quantitative approach to the evaluation of the critical parameters that determine the performance of hybrid von Neumann-dataflow processors. In particular, we focus on architectural design alternatives that permit exploiting instruction, data and stream locality in dataflow graphs with minimal or no loss of program parallelism. The results indicate that a significant performance improvement can be obtained by moving the dataflow execution paradigm to a coarser granularity with von Neumann like execution within a grain. Investigators: Walid Najjar and Wim Bohm. Sponsor: National Science Foundation. SISAL: We have developed, with other research groups, an implicitly parallel programming language. We share a common intermediate form and optimizers. SISAL has run on a dataflow machine at the University of Manchester, and is now available for several commercial MIMD systems, on a multi-vector processor, and many sequential machines. Current translation software produces code that runs sequentially as fast as conventional programs, and automatically yields efficient parallel execution on multiprocessors. 
Work continues on a distributed-memory version based on a custom virtual shared memory layer. A successor language version has been designed with improved array manipulation, a modern modular structure, a refined syntax, and full interface with other languages. Investigators: Rod Oldehoeft and Wim Bohm. Static Evaluation of Sequencing Constraints: Sequencing errors form an important class of faults in software. We can extend static data flow analysis techniques to detect violations of user-specified sequencing constraints. We have constructed and are experimenting with a sequencing analysis tool and constraint specification language. Investigator: Kurt Olender. Network Topology: Topology plays an important role in the performance evaluation of any computer network. The objective of this project is to investigate different fault tolerance properties and routing algorithms in the presence of faults. We examine known topologies and also design new, efficiently incrementable networks. Investigator: Pradip Srimani. Range Search: Multiple attribute key retrieval or multidimensional range searching has many applications in database management, computer graphics and computational geometry. Several interesting trie structures have been designed. We plan to develop parallel algorithms to improve execution time further. The objective is to develop our understanding of different data structures and algorithms. Investigator: Pradip Srimani. Distributed Systems: We are interested in designing fault tolerant mutual exclusion algorithms and evaluating their performance. Recently we have examined the problem of deadlock detection and studied heuristic algorithms for deadlock avoidance. Investigator: Pradip Srimani. Software Maintenance Toolkit: The Ada Maintenance Toolkit (AMT) project develops algorithms and tools based on them that facilitate code changes, based on fully incremental analysis. The AMT includes tools for regression analysis, metrics, chunking, and a new approach to user interfaces. We are also investigating ways to preserve a good object oriented design during maintenance. Lastly, addition of formal semantics will facilitate reverse engineering. Investigator: Anneliese von Mayrhauser. Software Reliability Simulator: There is much data about software reliability. Some are available early, during detailed design. Others are represented in program structure of higher level languages. We are building a simulator that takes these data and predicts software reliability levels based on them. Investigator: Anneliese von Mayrhauser. Performance Evaluation: Even simple computer systems can require very large stochastic Petri nets when modeled. We are investigating possible simplification and decomposition strategies that simplify the models while preserving a high degree of accuracy. Investigator: Anneliese von Mayrhauser. Domain-Based Testing: SLEUTH is a system that automatically generates scripts, command templates and commands for command driven software. We are investigating its appliation reuse, regression testing and test management. Investigator: Anneliese von Mayrhauser. Automatic Production of Code Generators: Chop is a system that reads a nonprocedural description of a computer instruction set, generates rules automatically that specify a code generator and a peephole optimizer, and produces a directly executable code generator and optimizer. The result is very efficient, executing up to 40 times faster than commonly used comparable software. 
Continuing work includes further speedups of the rewriting system, application to non-orthogonal architectures, and integration of a high-quality register allocator. Investigator: Alan Wendt.

Enhanced Editors: This topic includes program editors specialized for version control and some business-oriented applications such as an automated non-WYSIWYG screen record layout system. Investigator: Alan Wendt.

Genetic Algorithms: Genetic algorithms are a class of adaptive optimization procedures with applications in function optimization and machine learning. Several on-going research projects are investigating the fundamental principles by which these algorithms work, with the goal of improving genetic algorithm implementations. Also, the use of genetic algorithms for designing and optimizing neural networks is being investigated. Investigator: Darrell Whitley. Sponsor: National Science Foundation.

Applying Genetic Algorithms to Scheduling Problems: This research is aimed at building a sequencing optimization tool using genetic algorithms. The work involves improving the performance of genetic algorithms on classic sequencing problems such as the Traveling Salesman as well as real world scheduling systems. Investigator: Darrell Whitley. Sponsor: Coors.

Applying Genetic Algorithms to Geophysical Applications: There are several problems in geophysics related to the interpretation of seismic data that involve the optimization of nonlinear functions which are known to be multimodal. We are exploring the use of genetic algorithms for solving these problems. We are testing new varieties of genetic algorithms that dynamically remap hyperspace during search. Investigator: Darrell Whitley. Sponsor: Colorado Advanced Software Institute. Collaborators: Amoco Production Company and Advanced Geophysical.

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 20 Oct 93 13:49:58 EDT From: segall+@cs.cmu.edu (Ed Segall) Subject: Re: Help---references to CORDIC algorithm Message-ID: Sender: news@cs.cmu.edu (Usenet News System) Nntp-Posting-Host: hummingbird.warp.cs.cmu.edu Organization: School of Computer Science, Carnegie Mellon References: <1993Oct14.171601.3726@hubcap.clemson.edu> Date: Wed, 20 Oct 1993 17:49:42 GMT

In article <1993Oct14.171601.3726@hubcap.clemson.edu>, Shigeru Ishimoto wrote:
>Dear Grouper,
>
>I am looking for the paper on CORDIC algorithm which discovered in the
>period 1960-1970. The algorithm was discovered again by Dr. Richard Feynman
>in 1980's.
>Could anyone give me information.
>...

The CORDIC algorithm is presented in Kai Hwang, "Computer Arithmetic, Principles, Architecture and Design," John Wiley & Sons, 1979, pp. 368-373. The presentation there is based on Walther, J.S., "A Unified Algorithm for Elementary Functions," SJCC, 1971, pp. 379-385. The original paper is Volder, J.E., "The CORDIC Trigonometric Computing Technique," IEEE Trans. Elec. Comp., Vol. EC-9, Sept. 1960, pp. 227-231.

--Ed
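Since the thread is about the algorithm itself, here is a minimal sketch of CORDIC in rotation mode, written in plain C for readability (double precision, fixed iteration count; practical implementations use integer shift-and-add arithmetic and a precomputed arctangent table). It is a generic illustration, not code taken from any of the references above.

    #include <math.h>
    #include <stdio.h>

    #define ITER 32

    /* CORDIC, rotation mode: drive the residual angle z to zero with
       rotations by atan(2^-i).  Starting from (1/K, 0), where K is the
       accumulated gain, the vector converges to (cos(theta), sin(theta)). */
    static void cordic(double theta, double *c, double *s)
    {
        double K = 1.0;
        for (int i = 0; i < ITER; i++)
            K *= sqrt(1.0 + pow(4.0, -i));         /* gain of one micro-rotation */

        double x = 1.0 / K, y = 0.0, z = theta, p = 1.0;   /* p = 2^-i */
        for (int i = 0; i < ITER; i++) {
            double d = (z >= 0.0) ? 1.0 : -1.0;    /* always rotate toward z = 0 */
            double xn = x - d * y * p;
            y = y + d * x * p;
            x = xn;
            z -= d * atan(p);
            p *= 0.5;
        }
        *c = x;                                    /* ~ cos(theta) */
        *s = y;                                    /* ~ sin(theta) */
    }

    int main(void)
    {
        double c, s;
        cordic(0.5, &c, &s);                       /* converges for |theta| < ~1.74 */
        printf("%.6f %.6f\n", c, s);               /* compare with cos(0.5), sin(0.5) */
        return 0;
    }

The work inside the loop is only additions, subtractions and scalings by powers of two (shifts in fixed point), plus a small table of atan(2^-i) values, which is why CORDIC suited early hardware without fast multipliers.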
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: pcl@oxford.ac.uk (Paul C Leyland) Date: 20 Oct 93 19:04:44 Organization: Oxford University Computing Services, 13 Banbury Rd Oxford OX2 6NN Subject: RSA129 project passes 1 million mark

One million and counting.... The RSA-129 project has just passed the one million relations mark. As of 5am UT, Wednesday 20 October 1993, hot-spare.mit.edu had received 1030805 relations. These are distributed as follows:

   14263 full relations (fuls)
  182353 partial relations (pars)
  834189 double partial relations (pprs).

The full relations are usable as they stand. The pars and pprs have to be further processed to find cycles. So far, we have 1679 cycles. When the sum of the fulls and the cycles reaches 524400 we are almost done. A few hours' work on a workstation, followed by some heavy crunching on a MasPar, and we will know the Ultimate Answer (and I will be most upset if it turns out to be 42 :-)

The number of cycles might seem to be disappointingly small. However, the number of cycles per par and per ppr grows quadratically with the number of relations collected. We had fewer than 100 cycles from the first 250k relations; we now have 20 times as many cycles from only four times as many relations.

Because we still have relatively few cycles, it is difficult to give an accurate estimate of how much further we have to go. However, I can give a guestimate which won't be too far out. We know from previous large-scale runs of MPQS that the final total consists of about 20% fulls and 80% cycles. As we need something over half a million altogether, we can divide the number of fulls by one thousand and call that the percentage completion. Accordingly, my best estimate is that we are about 14% done.

As more machines come on-stream, we are collecting more and more relations per day. During October, we have averaged 24247 relations per day, with a peak of 31162 last Sunday. Machines tend to be more idle at the weekend; this shows up quite clearly in our statistics. It is difficult to determine exactly how many machines are contributing; certainly many hundreds. Even more would be nice, of course! What I can say is that we have allocated over 9000 UIDs so far.

The following is also very rough and ready. My DEC 5000/25 generates one relation per 1100 seconds on average, and is rated at 15 MIPS or so. Therefore, 24000 relations per day corresponds to an *average* compute power of 4600 MIPS. That's a powerful supercomputer by most people's standards. Almost all of this computation comes from machine time that would otherwise go to waste.

So, a big thank you to everyone who has contributed to the project so far. Your help is much appreciated. Anyone reading this who has not joined in yet is invited to send email to rsa129-info@iastate for more information. All you need is a Unix box with at least 8Mb of memory, some idle cputime, and a desire to join in the largest single computation currently taking place anywhere on the Internet.

Paul Leyland

-- Paul Leyland | Hanging on in quiet desperation is Oxford University Computing Service | the English way. 13 Banbury Road, Oxford, OX2 6NN, UK | The time is gone, the song is over. Tel: +44-865-273200 Fax: +44-865-273275 | Thought I'd something more to say. Finger pcl@black.ox.ac.uk for PGP key |
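As a quick sanity check of the two estimates in the RSA-129 status report above, the arithmetic can be written out as a few lines of C. The 524400 target, the 20%/80% split, the 1100-second and 15-MIPS figures are taken from the post; the assumption that the completion percentage is based on the full relations collected so far is mine.

    #include <stdio.h>

    int main(void)
    {
        /* Figures quoted in the RSA-129 status report */
        double fulls       = 14263.0;    /* full relations collected so far */
        double needed      = 524400.0;   /* fulls + cycles eventually required */
        double full_share  = 0.20;       /* ~20% of the final total are fulls */
        double rel_per_day = 24000.0;    /* rounded daily relation rate */
        double sec_per_rel = 1100.0;     /* one relation per 1100 s on a DEC 5000/25 */
        double mips        = 15.0;       /* rating of that machine */

        /* Completion: fulls collected versus fulls eventually needed (~105000) */
        printf("completion ~ %.1f%%\n", 100.0 * fulls / (full_share * needed));

        /* Average power: relation-seconds per day spread over 86400 seconds */
        printf("avg power  ~ %.0f MIPS\n", rel_per_day * sec_per_rel / 86400.0 * mips);
        return 0;
    }

Run as written, this prints roughly 13.6% and 4583 MIPS, consistent with the "about 14%" and 4600 MIPS figures in the post.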
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Csaba Zoltani (ACISD|CTD|CMB) Subject: PPL

Reference is made to the recent announcement in comp.parallel about PPL. We would be interested to know where the information is published and/or whether the articles can be downloaded electronically. Thank you for your reply in advance.

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wuchang@adirondacks.eecs.umich.edu (Wu-chang Feng) Subject: Paragon sources Date: 20 Oct 1993 19:50:13 GMT Organization: University of Michigan EECS Dept., Ann Arbor, MI Nntp-Posting-Host: adirondacks.eecs.umich.edu

I'm looking for references on Intel's Paragon, in particular on how the communication subsystem has been built. Thanks

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: marco@socrates.et.tudelft.nl (M. van Harsel) Subject: [REQ] Graph Partitioning using genetic algorithms Keywords: Graph Partitioning, genetic algorithms Organization: Delft University of Technology, Dept. of Electrical Engineering

Dear Colleagues, I am looking for references on Graph Partitioning using genetic algorithms. I already have all the references by Talbi El-Ghazali and Gregor von Laszewski. Any contribution will help. If you are interested, I will post a summary to the newsgroup. Thanks in advance, Marco van Harsel, e-mail: marco@duteca.et.tudelft.nl

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: Re: Parallel Fourier transforms From: scott@cannon.usu.edu (Scott Cannon) Reply-To: scott@cannon.usu.edu References: <1993Oct11.154829.27087@hubcap.clemson.edu>

I may not completely understand the question, but here is a simple answer: The FFT is inherently parallel -- an array of size 2^N can simply be broken into two halves -- the points with even indexes and those with odd indexes. Each half can then be transformed using a standard FFT. The results can then be combined to produce the grand transform with one more butterfly. Any signal-processing text will give you the simple mult/add operations of a butterfly. Naturally, each half could further be broken into two halves and so on... That is actually the basis of the FFT.

At the lowest level, a 2-pt transform is simple: If the input array is (f0, f1) then the transform is F0 = (f0+f1), F1 = (f0-f1). Combining these many 2-pt transforms into larger transforms is the job of the butterfly stages -- one stage for each next combination of halves. That is why the FFT requires an array size which is a power of 2.

Scott R. Cannon, PhD scott@cannon.cs.usu.edu Dept. of Computer Science (801) 750-2015 Utah State Univ. FAX (801) 750-3265 Logan, UT. 84322-4205
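To make the even/odd decomposition described above concrete, here is a minimal recursive sketch in plain C (complex double precision, power-of-two length assumed). It only illustrates the splitting and the butterfly combination; it makes no attempt at parallelism, in-place operation or efficiency.

    #include <complex.h>
    #include <stdio.h>

    /* Radix-2 decimation in time: split into even- and odd-indexed halves,
       transform each half, then combine them with one butterfly stage. */
    static void fft(double complex *x, int n)
    {
        if (n == 1)
            return;                        /* a 1-point transform is the input */

        double complex even[n / 2], odd[n / 2];
        for (int i = 0; i < n / 2; i++) {
            even[i] = x[2 * i];
            odd[i]  = x[2 * i + 1];
        }
        fft(even, n / 2);                  /* the two half-size transforms are  */
        fft(odd,  n / 2);                  /* independent, hence parallelizable */

        const double pi = 3.14159265358979323846;
        for (int k = 0; k < n / 2; k++) {  /* butterfly combination stage */
            double complex w = cexp(-2.0 * I * pi * k / n) * odd[k];
            x[k]         = even[k] + w;
            x[k + n / 2] = even[k] - w;
        }
    }

    int main(void)
    {
        double complex x[8] = { 1, 1, 1, 1, 0, 0, 0, 0 };
        fft(x, 8);
        for (int i = 0; i < 8; i++)
            printf("%7.3f %+7.3fi\n", creal(x[i]), cimag(x[i]));
        return 0;
    }

For n = 2 the combination loop reduces exactly to the F0 = (f0+f1), F1 = (f0-f1) pair given above, and the two recursive calls on the halves are the natural point at which to distribute work across processors.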
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: paprzycki_m@gusher.pb.utexas.edu Subject: Call for Papers

Dear Netters, If you are interested in giving your company or your work more visibility behind a former iron curtain, or if you wish to contribute to the growth of our European colleagues' journal, please consider a submission to the following Special Issue.

----------------------------------------------------------------------------
CALL FOR PAPERS
DISTRIBUTED AND PARALLEL REAL TIME SYSTEMS
Special Issue of INFORMATICA
Guest Editors: Marcin Paprzycki and Janusz Zalewski
University of Texas-Permian Basin

We would like to invite papers for the Special Issue of INFORMATICA, An International Journal of Computing and Informatics, published in English by the Slovene Society Informatika and the Josef Stefan Institute in Ljubljana, Slovenia. The scope of the volume will encompass a variety of issues associated with the recent developments in the area of distributed and parallel real-time computing. Papers related to both hardware and software aspects of concurrency will be considered. Their focus should be on the timeliness and responsiveness issues (bounded response time) of respective solutions.

Sample topics may include:
- multiprocessor buses and architectures
- real time features of local area networks
- message scheduling in distributed systems
- distributed and parallel operating systems
- task allocation and load balancing in real time
- interprocess synchronization and communication for real time
- specification and programming languages
- formal methods in specification and design
- debugging of distributed real-time systems
- designing parallel and distributed applications
- distributed real-time databases
- dependability, reliability and safety in distributed real-time systems
- standardization.

Only previously unpublished work will be accepted for the volume. All papers will be refereed.

Due dates:
* February 15, 1994   Submission deadline
* May 1, 1994         Notification of the authors
* June 1, 1994        Camera-ready versions due

All correspondence and requests for sample copies of INFORMATICA should be addressed to the Guest Editors at the following address:

Marcin Paprzycki and Janusz Zalewski
Dept. of Computer Science
University of Texas-Permian Basin
4901 E. University Blvd
Odessa, TX 79762-0001 USA
Phone: (915)367-2310 Fax: (915)367-2115
Email: paprzycki_m@gusher.pb.utexas.edu zalewski_j@utpb.pb.utexas.edu
----------------------------------------------------------------------------
Note. Prospective authors may also be interested in a topical event: The Second Workshop on Parallel and Distributed Real-Time Systems, April 28-29, 1994, Cancun, Mexico. For information on the workshop, please contact its Program Chairs:
Dieter K. Hammer, Technische Universiteit Eindhoven, wsindh@win.tue.nl
Lonnie R. Welch, New Jersey Institute of Technology, welch@vienna.njit.edu
----------------------------------------------------------------------------

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: K.S.Thomas@ecs.soton.ac.uk (Ken Thomas) Subject: Comett Course Date: 21 Oct 1993 09:14:34 +0100 Organization: Electronics and Computer Science, University of Southampton Keywords: Parallel Applications

Applications for High Performance Computers
Soultz, France
Date: November 2nd, 3rd, 4th and 5th, 1993 (Nov 2nd-5th, 1993)

The aim of this course is to understand some aspects of current applications of high performance computers. There are three main objectives:
1. To give an overview of parallel hardware and software and to explore the role of performance critical parameters. Matrix kernels are also explored.
2. To give awareness of the tools that are likely to be important in the future. This includes HPF (High Performance Fortran) and the message passing standards.
3. To put together applications in diverse areas of science and engineering. There are speakers on seismic modelling, CFD, Structural Analysis, Molecular dynamics and climate modelling.

Programme.
Day 1 14.00 Start Introduction and Welcome Session 1 Overview Introduction to Parallel Hardware Introduction to Parallel Software Panel Discussion Day 2 Start 09.30 Session 2 Performance Characterization Low-level Benchmarks and Performance Critical Parameters CFD Session 3 Applications I Seismic Modelling Climate Modelling Panel Discussion Day 3 Start 9.30 Session 4 HPC Standards HPF Message-Passing Interface Session 5 Parallel Matrix Kernels Structural Analysis Panel Discussion Day 4 Start 09.00 Session 6 The Parkbench Initiative Grand Challenge Applications Panel Discussion. Close 12.15 Cost 375 pounds sterling (Full Rate) 275 pounds sterling for academic participants and members of ACT costs include lunch and refreshments throughout the day. Minimum numbers 10 This course cannot be given unless there is a minimum of 10 participants. It will be necessary to receive the your registration no later than Monday 25th October, 1993. Should the course not run, then all registration fees will be returned. Applications for High Performance Computers Soultz, France Date: November 2nd, 3rd, 4th and 5th, 1993 Applications for High Performance Computing Registration Form Title . . . . . . . . . . . . . . . . . Surname . . . . . . . . . . . . . . . . First Name . . . . . . . . . . . . . . . Institution . . . . . . . . . . . . . . . . . . . . . . . . . . . Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tel: . . . . . . . . . . . . . . . .. Fax: . . . . . . . . . . . . . . . . . I enclose a cheque in the sum of . . . . . . . . . . . . . . . . . . Made Payable to a"University of Southampton". Please forward cheque and registration to Telmat Informatique. Venue: Telmat Informatique Z.1. - 6 Rue de l'industrie, B P 12 68360 Soultz Cedex France Local Accommodation Arrangements contact: Rene Pathenay/Francoise Scheirrer Telmat Informatique Tel: 33 89 765110 Fax: 33 89 742734 Email: pathenay@telmat.fr Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel, comp.sys.super From: Lasse.Natvig@idt.unit.no Subject: Computational Science, definitions? Message-ID: Organization: /home/garm/a/lasse/.organization At the University of Trondheim, Norway, we are planning to build an educational programme in the field "computational science". (Other terms may be supercomputing, or scientific computing). As a start, we want to define the field, and study other programmes in this direction. In this context I would appreciate all kind of pointers to relevant definitions, and information about educational programmes. Pointers to important research institutions are also relevant. If there are interest in it, I will post a summary to the net. Thanks in advance. Lasse Natvig Associate Professor The Norwegian Institute of Technology The University of Trondheim Postal Address: E-mail: lasse@idt.unit.no Lasse Natvig UNIT/NTH-IDT Fax: +47 7 594466 O. S. 
Bragstads plass 2E Phone: +47 7 593685 N--7034 Trondheim, NORWAY Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: I Flockhart Subject: parallelising out of core global image processing operations Keywords: Parallelisation Image Processing Organization: Edinburgh Parallel Computing Centre I'm currently looking into the parallelisation of global and/or non-regular (3d) image processing operations, where the images concerned do not fit into distributed core memory. A typcial example might be the Fourier transform. I'm interested in collecting references for any literature that may help with this topic, and also to hear from anyone who has worked on similar problems in the past. If you can assist with either a reference or the benefit of experience, please email me directly at: ianf@epcc.ed.ac.uk I'll collate any references I get and post them back to comp.parallel at a later date. Thanks Ian ---------------------------------------------------------------------- | e|p Edinburgh Parallel Computing Centre | | c|c University of Edinburgh | ---------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: heinze@ira.uka.de (Ernst A. Heinz) Subject: Modula-2* Environment paper (updated version) available by anonymous ftp Organization: University of Karlsruhe, FRG Keywords: Modula-2*, portable parallel programming environment, optimizing compilation, high-level parallel debugging For all interested folks we have now made the final version of our MPPM'93 paper publicly available by anonymous ftp. The title of the paper reads as follows: "The Modula-2* Environment for Parallel Programming" (by S.U. Haenssgen, E.A. Heinz, P. Lukowicz, M. Philippsen, and W.F. Tichy). To retrieve PostScript or compressed Postscript versions of the paper please connect to i41s10.ira.uka.de [129.13.13.110] by anonymous ftp. There, the directory pub/m2s contains the corresponding files named mppm93.ps and mppm93.ps.Z. -rw-r----- 1 ftp ftp 557329 Oct 21 15:27 mppm93.ps -rw-r----- 1 ftp ftp 135382 Oct 21 15:27 mppm93.ps.Z Don't forget to use binary mode when retrieving the compressed PostScript versions! Please send us a short email containing your full name and address plus affiliation if you retrieve any of the above. For your information we include the abstract of our MPPM'93 paper below. -------------------------------------------------------------------------------- THE MODULA-2* ENVIRONMENT FOR PARALLEL PROGRAMMING This paper presents a portable parallel programming environment for Modula-2*, an explicitly parallel machine-independent extension of Modula-2. Modula-2* offers synchronous and asynchronous parallelism, a global single address space, and automatic data and process distribution. The Modula-2* system consists of a compiler, a debugger, a cross-architecture make, graphical X Windows control panel, runtime systems for different machines, and sets of scalable parallel libraries. The existing implementation targets the MasPar MP series of massively parallel processors (SIMD), the KSR-1 parallel computer (MIMD), heterogeneous LANs of workstations (MIMD), and single workstations (SISD). The paper describes the important components of the Modula-2* environment and discusses selected implementation issues. We focus on how we achieve a high degree of portability for our system, while at the same time ensuring efficiency. 
-------------------------------------------------------------------------------- In case of any questions or problems, please don't hesitate to contact us directly. Cheers. =Ernst= +--------------------------------------------------------+-------------------+ | Ernst A. Heinz (email: heinze@ira.uka.de) | | | Institut fuer Programmstrukturen und Datenorganisation | Make it as simple | | Fakultaet fuer Informatik, Universitaet Karlsruhe | as possible, but | | Postfach 6980, D-76128 Karlsruhe, F.R. Germany | not simpler. | | (Voice: ++49/(0)721/6084386, FAX: ++49/(0)721/694092) | | +--------------------------------------------------------+-------------------+ Newsgroups: comp.parallel,comp.sys.super From: stevehubcap.clemson.edu (Steve Stevenson-``Not the Moderator'') Subject: Re Approved: parallel@hubcap.clemson.edu > At the University of Trondheim, Norway, we are planning to build an > educational programme in the field "computational science". (Other > terms may be supercomputing, or scientific computing). As a start, we > want to define the field, and study other programmes in this > direction. > > In this context I would appreciate all kind of pointers to relevant > definitions, and information about educational programmes. Pointers to > important research institutions are also relevant. Available via ftp is a preprint of an article to appear in the CACM on this subject. It is one viewpoint, but it includes philosohical justification as well as the things we do here at Clemson and in an NSF Workshop. ============================================================ $ ftp hubcap.clemson.edu Connected to hubcap.clemson.edu. 220 hubcap FTP server (Ultrix Version 4.1 Mon Aug 27 12:10:05 EDT 1990) ready. Name (hubcap:): ftp 331 Guest login ok, send ident as password. Password: 230 Guest login ok, access restrictions apply. ftp> cd destv ftp> bin ftp> CACM-revised.ps.Z =============end of instructions=================================== Steve (really "D. E.") Stevenson steve@hubcap.clemson.edu Department of Computer Science, (803)656-5880.mabell Clemson University, Clemson, SC 29634-1906 Wanted: Sterbetz, P. Floating Point Computation Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sson@nevada.et.byu.edu (Stacey Son) Organization: Brigham Young University, Provo UT USA Newsgroups: comp.parallel Subject: Re: Any good simulators Reply-To: sson@byu.edu References: <1993Oct15.121123.29739@hubcap.clemson.edu> >>>>> On Fri, 15 Oct 1993 12:11:23 GMT, leclerc@cis.ohio-state.edu (Anthony Leclerc) said: Tony> I'm teaching an "Architecture of Advanced Computer Systems" course in Tony> the Tony> Spring semester. Does anyone know of good simulators which are Tony> publically Tony> avaiable? Try the following: (1) Proteus from mintaka.lcs.mit.edu. Proteus is an execution-driven simulator that runs on MIPS and SPARC based machines. (2) dlxmsim from ftp.cc.gatech.edu. dlxmsim is a multiprocessor emulator based on the DLX instruction set. (3) Tango from Stanford. Tango is like Proteus. To get it you must get a license first. Send mail to comments@meadow.stanford.edu for more information or ftp to meadow.stanford.edu. (4) For a superscalar simulators try superdlx from wally.cs.mcgill.ca or Mike Johnson's PIXIE based stuff from velox.stanford.edu. Hope this helps, -- Stacey D. 
Son Pmail: 459 Clyde Building CAEDM Operations Manager Provo, UT 84602 Brigham Young University Voice: (801)378-5950 College of Engineering & Technology FAX: (801)378-5705 Office: 306A Clyde Building Email: sson@byu.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: west@jade.ab.ca (Darrin West) Subject: 65 processor Meiko for sale. Message-ID: <1993Oct21.233844.3378@jade.ab.ca> Organization: Jade Simulations International, Inc. Date: Thu, 21 Oct 1993 23:38:44 GMT We have some Meiko equipment for sale. The backplanes may be more valuable than the boards, in that it may support newer boards (can anyone comment on this?). This may be the perfect system for a university wanting to have a large number of nodes for experimentation, without the need for the most powerful cpu's. The cpu's in this box are around 1 Vax mips. 1 M40/M40 Enclosure - holds up to 40 boards 1 M10/M10E Enclosure - holds up to 10 boards 9 MK060 Quad compute Element with 4MBytes per processor 3 MK060 W/ 4cpu 8MBytes per cpu. 1 MK061 Single compute Element with 16MBytes per processor 1 MK014 1 MK28 1 MK050 Self host with SCSI 1 MK201 In-sun Quad compute element with 4MBytes per processor 1 MK200 In-sun Quad compute element with 4MBytes per processor 2 MK20? In-sun Quad compute element with 4MBytes per processor Various Operating Software This is a total of 65 processors, and tons of memory on little SIMS. We currently have this equipment attached to three different suns (one of them with no external cabinet). There may be the option of selling one of our sun3/160's with this equipment. It has several slots for the in-sun boards, which are needed to attach to the external cabinets. I dont recall ever having both cabinets hooked to the same sun, but that may be possible. We used CSTOOLS exclusively, and convinced a version of sun's C++ cfront to produce C code that the meiko compilers would accept. We will entertain practically any serious offer. Please feel free to get hold of me if you want details. I apologize if this post breaks the net rules. I dont have access to any groups that specialize in selling things, and I would likely want to cross-post here in any case. I sincerely hope that this system could be put to good use in a research environment somewhere. Maybe there is still some commercial use for this stuff too. Thanks for perusing this note. -- Darrin West, MSc. Jade Simulations International Corporation. west@jade.ab.ca Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Wittmann@sunmail.lrz-muenchen.de Subject: Classification of parallel algorythms Organization: Technische Universitaet Muenchen I'm trying to classify parallel algorithms. Especially I'm interested in their characteristial SVM-properties (Shared Virtual Memory). Therefor I need some literature about application schemes and classification of algorithms, not only of parallel ones. 
If you know any literature dealing with this subject, please mail wittmann@informatik.tu-muenchen.de Thanks for your help Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: icw@ecs.soton.ac.uk (I C Wolton) Subject: RAPS Workshop Organization: Electronics and Computer Science, University of Southampton RAPS Open Workshop on Parallel Benchmarks and Programming Models Chilworth Manor Conference Centre, Southampton, UK 7-8 Dec 1993 Workshop Overview ----------------- This workshop will review recent developments in programing models for parallel applications, outline the features of some of the RAPS parallel benchmarks and present some complementary international initiatives. The final session will give the vendors an opportunity to present the results of the RAPS benchmarks on their latest machines. The RAPS Consortium ------------------- The RAPS consortium was put together to promote the creation of benchmarks for important production applications on massively-parallel computers (RAPS stands for Real Applications on Parallel Systems). As part of this activity it has a strong interest in adopting a programming model that can provide portability without excessive sacrifice of performance. The consortium consists of a number of users and developers of significant large production codes running on supercomputers. It is supported by a Consultative Forum of computer manufacturers which currently includes Convex, Cray, Fujitsu, IBM, Intel and Meiko. Codes being worked on for the RAPS benchmark suite include: PAM-CRASH - a finite element code mainly used for car crash simulations IFS/ARPEGE - a global atmospheric simulation code used for meteorology and climatology FIRE - a fluid flow code used for automotive flow simulations GEANT - used by CERN to simulate the interaction of high-energy particle showers with detectors Provisional Programme --------------------- The workshop will be held over two days, starting after lunch on Tuesday 7th December and finishing at lunchtime on Wednesday 8th December. Lunch will be available on both days. Tuesday 7 Dec, Afternoon Current status of RAPS Karl Solchenbach, PALLAS ESPRIT Application porting activities Adrian Colebrook, Smith, Guildford The PAMCRASH benchmark Guy Lonsdale, ESI The Proposed Message Passing Interface Standard (MPI) Ian Glendinning, University of Southampton Distributed Fortran Compiler Techniques Thomas Brandes, GMD Impact of Cache on Data Distribution Richard Reuter, IBM Heidelberg Workshop Dinner Wed 8 Dec, Morning The PEPS Benchmarking Methodolgy Ed. Brocklehurst, National Physical Laboratory The PARKBENCH Initiative T. Hey, University of Southampton The IFS spectral model: the 3D version with some preliminary results David Dent, ECMWF Vendor's Presentation of Results for the RAPS Benchmarks Registration Details -------------------- The registration fee is 120 pounds sterling, including lunch and refreshments. An optional workshop dinner is being arranged at 25 pounds per head. Accomodation is available at Chilworth Manor for 52.50 pounds per night. Cheques should be made payable to "University of Southampton" Bookings and enquiries to: Chris Collier Electronics & Computer Science Highfield University of Southampton Southampton S09 5NH Tel: +44 703 592069 Fax: +44 703 593045 Email: cdc@ecs.soton.ac.uk This form should be returned to the conference organiser, Chris Collier. Name ....................................................... 
Organisation ...............................................
Address ....................................................
....................................................
Telephone ..................................................
Email ......................................................
Special Dietary Requirements ................................
.............................................................
Registration (Price in Pounds Sterling) : 50.00
I would like accommodation at 52.50 pounds per night for the nights of
.............................................................
I would like to attend the workshop dinner at 25 pounds ...... Yes/No
TOTAL FEE ENCLOSED ...............................................

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mende@rutgers.edu (Bob Mende Pie) Subject: nCUBE2S systems programmer job announcement in misc.jobs.offered Organization: 40.34N / 74.44W +25m Sol 3

I wanted to let people here know that this is a job announcement for a parallel systems programmer position at the Rutgers University CAIP Center. The position involves supporting and programming a large nCUBE2S system as well as a few other types of parallel machines. The full text of the posting should be available in misc.jobs.offered. Feel free to contact me (mende@caip.rutgers.edu) for more details of this position.

-- /Bob... {...}!rutgers!mende mende@piecomputer.rutgers.edu mende@zodiac.bitnet

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: soc.college.gradinfo,comp.edu,comp.ai,comp.ai.genetic,comp.ai.neural-nets,comp.software-eng From: mmilde@hubcap.clemson.edu (Michael N Milde) Subject: Re: Ph.D. fellowships in CS (esp. AI, PP, SE) Organization: Clemson University, Clemson SC References: <1993Oct21.202255.24974@hubcap.clemson.edu>

In article <1993Oct21.202255.24974@hubcap.clemson.edu> rro@CS.ColoState.EDU (Rod Oldehoeft) writes:
> PH.D. FELLOWSHIPS AVAILABLE
[stuff deleted...]
>4. You must be a woman or a member of a minority group, or both.
[stuff deleted...]

I started reading this with great enthusiasm before I realized they discriminate against white males. The color of someone's skin doesn't concern me. I wish other people felt the same way... Mike mmilde@hubcap.clemson.edu

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mendes@banjo.cs.uiuc.edu (Celso Luiz Mendes) Subject: DLXV simulator needed

I'm looking for a simulator of DLXV, the vector version of DLX, as described in the book "Computer Architecture: A Quantitative Approach" by Hennessy & Patterson. In fact, any extension of another simulator (like the one for MIPS) to a vector architecture would be helpful. Thanks in advance for any help or pointers. -Celso

-----------------------------------------------------------------------------
Celso Luiz Mendes, Univ. of Illinois at Urbana-Champaign, Dept. of Computer Science, DCL-3244
E-mail: mendes@cs.uiuc.edu Phone: (217)333-6561 (office), (217)367-7355 (home)

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rgomez@svarga.inria.fr (Roberto Gomez-Cardenas) Subject: SILICON GRAPHICS 4D/35 CHARACTERISTICS Sender: news@seti.inria.fr Organization: INRIA, Rocquencourt

Hi everybody, We have implemented and measured an algorithm on a SUN 4.
In a reference we found a table of the CPU time of the same algorithm, for a Silicon Graphics 4D/35. To make a good comparation we need to known the characteristics of the Silicon 4D/35. There is anyone that can send me this information?. Thanks in advance; Roberto Gomez ______________________________________________________________ | | | Roberto GOMEZ CARDENAS | | e-mail: Roberto.Gomez-Cardenas@inria.fr | | fax: 39.63.53.30 "Imagination is more | | tel. 39.63.52.38 important than knowledge" | | INRIA - Rocquencourt A. Einstein | |______________________________________________________________| Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: chchen@hellcat.ecn.uoknor.edu (Changjiu Chen) Subject: How many solutions? Organization: Engineering Computer Network, University of Oklahoma, Norman, OK, USA In the following system, x_i is 1 or -1 and w_ij belongs to R. x_1 * ( w_11 + x_2 * w_12 + ... + x_n * w_1n ) > 0 . . . x_n * ( x_1 * w_1n + ... + x_(n-1) * w_(n-1)n + w_nn ) > 0. w is symmetric. Based on some given w, one can probably find many solutions of x. My goal is to find the maximum solutions of x associated with some particular w. I already found a specific w which let the system has combination(n,floor[n/2]) solutions of x. Can you prove or disprove that combination(n,floor[n/2]) is the maximum? Any comment is welcomed. Please email me directly. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: Help: hypercubes and fat trees From: Shigeru Ishimoto Dear Grouper, I am looking for the paper on the embedding hypercube in fat tree. Could anyone give information. Thanks, ----- _____ | A I S T Shigeru Ishimoto (ishimoto@jaist.ac.jp) | HOKURIKU 18-1 Asahidai Tatsunokuchichou Nomigun Ishikawaken Japan o_/ 1 9 9 0 Japan Advanced Institute of Science and Technology,Hokuriku Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: misc.jobs.offered,sci.math.num-analysis,comp.parallel From: bahendr@cs.sandia.gov (Bruce A. Hendrickson) Subject: Postdoc Opening at Sandia Labs Organization: Sandia National Laboratories, Albuquerque, NM Applied Mathematical Sciences Research Fellowship Sandia National Laboratories, Albuquerque, New Mexico The Computational Sciences, Computer Sciences and Mathematics Center at Sandia National Laboratories invites outstanding candidates to apply for the 1994 Applied Mathematical Sciences (AMS) Research Fellowship. The Fellowship is supported by the Officer of Scientific Computing of the U.S. Department of Energy. AMS Fellowships at Sandia provide an exceptional opportunity for innovative research in scientific computing on advanced architectures. They are intended to promote the transfer of technology from the laboratory research environment to industry and academia through the advanced training of new computational scientists. Candidates must be U.S. citizens, have recently earned a Ph.D. degree or the equivalent, and have a strong interest in advanced computing research. The Center maintains strong programs in a variety of areas, including analytical and computational mathematics, discrete mathematics and algorithms, computational physics and engineering, advanced computational approaches for parallel computers, graphics, and architectures and languages. Preference will be given to candidates applying in the fields of numerical analysis, computational science and parallel algorithm development. 
Candidates with knowledge of an application area (e.g., semiconductor device modeling, CFD, climate modeling) are especially encouraged to apply. Sandia provides a unique parallel computing environment, including a 1,872-processor Intel Paragon, a 1024-processor nCUBE 2, a 64-processor Intel IPSC, and two Cray supercomputers. The fellowship appointment is for a period of one year and may be renewed for a second year. It includes a highly competitive salary, moving expenses, and a generous professional travel allowance. Applicants should send a resume, a statement of research goals, and three letters of recommendation to: Robert H. Banks, Division 7531-121, Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185. The closing date for applications is January 31, 1994, although applications will be considered until the fellowship is awarded. The position will commence during 1994. For further information contact Richard C. Allen, Jr., at (505) 845-7825 or by e-mail, rcallen@cs.sandia.gov. Equal Opportunity Employer M/F/V/H U.S. Citizenship is Required Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: xfan@bubba.ece.uc.edu (Xianzhi Fan) Subject: References wanted Organization: University of Cincinnati Hi, All, I would greatly appreciate it if anyone could send me pointers to references for the Intel Paragon, Sparc Center 1000 and Cray-T3D. Regards, -- XianZhi Fan -+-+- Mail Location 30 xfan@thor.ece.uc.edu \ --+ Dept. of Electrical & Computer Engr. phone(o):(513)556-0904 \ |_| University of Cincinnati phone(h):(513)861-3186 / |__/ Cincinnati, OH 45221-0030 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mwesth@elaine.ee.und.ac.za (M.J. VAN DER WESTHUIZEN : PGRADS) Subject: Computing and Information (Shannon) Theory Organization: Elec. Eng., Univ. Natal, Durban, S. Africa Hi everybody, I have a strong gut feeling that a link must exist between Shannon's information theory and computing, but I cannot find the references, nor do I have the time or detailed knowledge to investigate the matter myself. How does one match a computing problem to a computer specification in real-time applications? If you quote MIPS or FLOPS, how do you compare a look-up table against, say, a series-expansion calculation of the sine of a floating-point value, i.e. trading memory for speed? If you use parallelism, you have to weigh communication delays against computing speed, so *communications* and *computations* must be measurable in the same units. So can't one use Shannon's indices for measuring *channel capacity*, in terms of entropy/second, as a means of specifying computer power? Can't the signal/noise ratio of a channel be equated to the computer's accuracy (due to roundoff errors)? In my research on real-time use of transputers I am often confronted with the difference between computing bandwidth and *speed*, which I find to be very similar to the transmission-line specifications of bandwidth and speed, which is what makes me ask this question. Can some theoretical computer scientists perhaps help? Thank you Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.transputer From: JZHOU@FINABO.ABO.FI (Jun Zhou MATE) Subject: Question about matrices in Occam?
Organization: ABO AKADEMI UNIVERSITY, FINLAND Date: Sat, 23 Oct 1993 18:29:33 GMT X-News-Reader: VMS NEWS 1.24 Hi Dear Friends: Can you recommend some routines for matrix operations in Occam 2? I am interested in addition, subtraction, multiplication and inversion of matrices. (A small illustrative sketch follows the announcements below.) Thank you for helping! Best Wishes ---- Jun Zhou Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: heinze@ira.uka.de (Ernst A. Heinz) Subject: Availability of Modula-2* Programming Environment Organization: University of Karlsruhe, FRG Sender: newsadm@ira.uka.de After putting the updated version of our MPPM'93 paper on ftp I received several mails asking for the availability of our Modula-2* Programming Environment. Because this might be of public interest I now post an answer to the net. "The Modula-2* people (email: msc@ira.uka.de) at the University of Karlsruhe, F.R.G., are currently preparing a new version of their Modula-2* programming environment to be publicly released at the end of November. The upcoming release will consist of binaries for DECStations and SparcStations. Target architectures include the MasPar MP series, the KSR-1, and several sequential Unix workstations." We will announce the availability of the new version on the net as soon as it is ready for public release. So please stay tuned! Cheers. =Ernst= +--------------------------------------------------------+-------------------+ | Ernst A. Heinz (email: heinze@ira.uka.de) | | | Institut fuer Programmstrukturen und Datenorganisation | Make it as simple | | Fakultaet fuer Informatik, Universitaet Karlsruhe | as possible, but | | Postfach 6980, D-76128 Karlsruhe, F.R. Germany | not simpler. | | (Voice: ++49/(0)721/6084386, FAX: ++49/(0)721/694092) | | +--------------------------------------------------------+-------------------+ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,misc.jobs.offered From: tph@beta.lanl.gov (Thomas Hughes) Subject: Job Opportunity in Parallel Algorithm Development Message-ID: <1993Oct24.025005.26459@newshost.lanl.gov> Sender: news@newshost.lanl.gov Organization: Mission Research Corporation Date: Sun, 24 Oct 1993 02:50:05 GMT Job Opportunity in Parallel Algorithm Development Mission Research Corporation, Albuquerque, is seeking a highly motivated individual to participate in development of parallel algorithms for smoothed particle hydrodynamics on machines such as the Paragon and CM-5. A strong background in numerical methods, as evidenced by journal publications, is required. Experience with C and Fortran on parallel machines is very desirable. A Ph.D. in applied math, physics or a related area plus 2 or more years of post-doctoral experience is preferred, but an MS with several years of independent research experience will be considered. Please mail resumes, with publication/presentation list, to: Tom Hughes, Mission Research Corp., 1720 Randolph, SE, Albuquerque, NM 87106. US citizenship or permanent residence required.
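On the matrix-routine question a few items above: the basic operations are short enough to write directly. The following is a plain C sketch (the dimension N and the use of double are assumptions), whose loop nests translate one-for-one into Occam 2 replicated SEQ constructs, with the outer loop over rows a natural candidate for PAR. Matrix inversion is normally done via Gaussian elimination or an LU factorisation rather than a closed formula, so it is not shown here.

#define N 4                       /* example dimension; an assumption */

/* c = a + b  (subtraction is identical with '-') */
void mat_add(const double a[N][N], const double b[N][N], double c[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i][j] = a[i][j] + b[i][j];
}

/* c = a * b, the classical triple loop */
void mat_mul(const double a[N][N], const double b[N][N], double c[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++)
                s += a[i][k] * b[k][j];
            c[i][j] = s;
        }
}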
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: lama@nile.eng.ohio-state.edu (Lama Hamandi) Subject: architecture of parallel machines Message-ID: <1993Oct24.142414.29625@ee.eng.ohio-state.edu> Sender: news@ee.eng.ohio-state.edu Organization: The Ohio State University Dept of Electrical Engineering Date: Sun, 24 Oct 1993 14:24:14 GMT Hello fellow netters, I am looking for some references on the detailed architecture of some parallel machines and their corresponding processors. I am interested in the Delta, Paragon, Cray C-90 (and any newer version of the Cray family!) and the Sigma machine. If you know of any papers, reports or books please post them on the net or email me: lama@ee.eng.ohio-state.edu Thanks lama Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: abw@maths.uq.oz.au (Alan Williams) Subject: Re: The Future of Parallel Computing Organization: Prentice Centre, University of Queensland References: <1993Oct14.131912.9126@hubcap.clemson.edu> <1993Oct19.154929.15823@hubcap.clemson.edu> dbader@eng.umd.edu (David Bader) writes: > Is the same true for the emergence of parallel computing? In my >opinion, no. We have not ground out a "standard" representation for >the model of parallel computing. We have not put enough effort into >the theory of parallel algorithmics. Throwing faster hardware at us >will not solve the problem. Even if the benchmark time for a given >application is cut in half, what happens as we try to increase the >problem size by a factor of K ? The compiler then must have the task >of decomposing the algorithm onto the underlying hardware. It is just >wrong to require the programmer to have a detailed knowledge of the >hardware, data layout, and compiler tricks just to get anywhere near >"benchmarked" performance rates. I think you're expecting too much from the compiler writers. There are so many paradigms (sp?) for parallel computing it would be nearly impossible to define a 'standard model'. The whole point of the different architectures is to solve different problems. > We are now in an age when the high performance machines have >various data network topologies, i.e. meshes, torii, linear arrays, >vector processors, hypercubes, fat-trees, switching networks, etc.. >etc.. These parallel machines might all have sexy architectures, but >we are headed in the wrong direction if we don't take a step back and >look at the future of our work. We shouldn't have to rewrite our >algorithms from scratch each time our vendor sells us the latest >hardware with amazing benchmarks. I think you're approaching this precisely backwards. The new machines are invented to handle the different (or new) algorithms. A massively parallel machine does some things very well, and others not so well; and the same can be said for vector processors, etc. I think that usually, one would decide which computer to use or buy based on the problem to be solved, not buy a computer and then try to write a problem to solve with it. However, if you have a problem that could take advantage of a new architecture that comes along, it's worth doing some re-structuring if necessary. >Benchmarks should also be >attainable from STANDARD compiler options. We should NOT have to >streamline routines in assembly language, give data layout directives, >nor understand the complexities of the hardware and/or data network. You get out what you put in. 
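One small, concrete instance of the layout sensitivity being argued about here (a generic C illustration, not tied to any machine or compiler named in this thread): the same reduction traverses memory with stride 1 in one loop order and with a large stride in the other, and on cached or vector hardware the two can differ greatly in speed; a compiler may or may not interchange the loops for you.

#define ROWS 1024
#define COLS 1024

/* Row-major C array: the j-inner loop walks memory contiguously ... */
double sum_rowwise(const double a[ROWS][COLS])
{
    double s = 0.0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            s += a[i][j];
    return s;
}

/* ... while the i-inner loop strides by COLS doubles per access. */
double sum_colwise(const double a[ROWS][COLS])
{
    double s = 0.0;
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            s += a[i][j];
    return s;
}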
Alan Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wittmann@Informatik.TU-Muenchen.DE (Marion Wittmann) Subject: re: classification of algorithms Originator: wittmann@hphalle7d.informatik.tu-muenchen.de Sender: news@Informatik.TU-Muenchen.DE (USENET Newssystem) Organization: Technische Universitaet Muenchen, Germany Date: Mon, 25 Oct 1993 13:33:01 +0100 A few very nice people have answered to my news : I'm trying to classify parallel algorithms. Especially I'm interested in their characteristial SVM-properties. Therefor I need some literature about application schemes and classification of algorithms, not only of parallel ones. If you know any literature dealing with this subject, please mail wittmann@informatik.tu-muenchen.de Thanks for your help Below you can find a summary of all answeres I got till now %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% From: Carlos Falco Korn To: wittmann@informatik.tu-muenchen.de Subject: Classification of algorithms Hi Marion, I'll write in english in case you want to collect and send all responses to the net. We have developed in Basel a methodology for classifying parallel algortihms; we call it BACS (Basel Algorithm Classification scheme). The latest report is available through anonymous ftp from lucy.ifi.unibas.ch, subdirectory baks It is in Postscript format and prints out nicely on NeXT and Macs. If you have problems with it, you can obtain a hardcopy by writing to Prof. Dr. H. Burkhart Institut fuer Informatik, Uni. Basel Mittlere Str. 142 4056 Basel, Schweiz and asking for the report. The classification concentrates on 3 major parts relevant to parallel algorithms: processes, interactions and data. We have identified the most usual attributes related to these three aspects. Work on this project is still going on: things have focused lately on the development of tools based on the classification (see report). But now, there is also major work on giving the terminology a major overview, ie. concretizing several points that have been a little bit neglected. If everything works well, the new report should be finished before X-mas. Note that it will represent ONLY a revision, this means that the structure of the classification (given in the actual report) will NOT change. I left Basel 1 month ago, and am now at Manchester University. I intend to relate BACS to SVM programming models (we have a KSR in Manchester), and check the consequences: BACS relies heavily on message passing, and I would like to generalize it by regarding SVM. Therefore I am naturally interested in your work. Could you explain a bit further what you intend to do, ie. what you expect to find out? I would appreciate a short comment. Cheers, Carlos PS: feel free to contact me on any question on BACS. ************************************************************* * Dr. Carlos Falco Korn | * * Center for Novel Computing | * * Dept. 
of Computer Science | tel: +44-61-2756144 * * The University | fax: +44-61-2756204 * * Manchester, M13 9PL | email: korn@cs.man.ac.uk * * United Kingdom | * ************************************************************* %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% From: A.D.Ben-Dyke@computer-science.birmingham.ac.uk To: " (Marion Wittmann)" Subject: Re: classification of algorithms You can try the following: @Book{Parallel:Algorithms, author = {Alan Gibbons and Wojciech Rytter}, title = {Efficient Parallel Algorithms}, publisher = {Cambridge University Press}, year = {1988}, annote = {Gives an overview of the techniques used in developing parallel algorithms - it uses the PRAM as the underlying computational model, and describes some of the more basic paradigms (DC, graph stuff) before going on to cover some more complicated structures. All the usual stuff on sorting, parsing, string matching and graph algorithms, as well as a chapter on P-completeness - the class of hardly parallelisable algorithms.}, got = {In Simon's Collection}, } This has a fairly comprehensive bibliography and the PRAM style is close(ish) to the SVM approach. The following is interesting, but not directly relevant: @TechReport{BACS:Skeletons, author = {Helmar Burkhart and Carlos Falco Korn and Stephan Gutzwiller and Peter Ohnacker and Stephan Waser}, title = {{BACS}: Basel Algorithm Classification Scheme}, institution = {University of Basel, Switzerland}, number = {93-3}, year = {1993}, month = {March}, email = {bacs@ifi.unibas.ch}, ftp = {lucy.ifi.unibas.ch:/baks/BACS_1.1_english.ps.Z}, annote = {Describes the software dilemma: learning phase is enormous, too many details required from the user, writing correct programs is very difficult (=> limit on size of manageable system), not protable. Two approaches are discussed: 1. the use of virtual machines to avoid machine dependencies, but they must be general enough to be implementable on a range of machines but not so abstract as to be inefficient. 2. orthogonality of computation and coordination (i.e. implicit parallelism). Another method, algorithmic reusability is proposed, which sits between the two previously mentioned methods (i.e. can provide an intermediate language for mapping a very HLL to a low level one such as Linda). The system defines two types of process - normal computation and demons, with the first step in the process being to develop a computation (with or without demon) and interaction graph (active and passive respectively). As the graph can change at runtime (controlled by the demon) a set of construction (for creating the processes) and destruction (for dismantling and garbage collection) rules need to be provided - one algorithm may have several different rules. A hierarchy of algorithm toplogy is developed with regular and irregular (each process has its own construction rule) and homgenous (all instructions are the same within a group) and inhomogenous (worker) providing the first two levels. Also an algorithm is static (construct/destruct ops called once) or dynamic. Getting back to comms, direct and global (more than 2 processors involved - and either being coupled (explicit involvement) or decoupled) comms are diferentiated. Now the (de)comp rules need to specify what data is distributed and how is it to be achieved and whether they're input/output/temporary. Then global atmoic variables are covered using local, global and deicated (static/dynamic) as the categories with monotone variables being supported. 
The data granularity is then defined as fine, medium or coarse (depending on the relationship between the total size of the struct vs the amount processed locally). Then the macroscopic program structure is defined (i.e. skeletonish) and the process granularity (fine => SIMD, medium, coarse), and the algorithm is then classified (Static/Dynamic Process, Global/Static/Dynamic Data, with more than one attribute being assignable) and typical examples of each discussed. On top of this classification, other attributes are included: topology (data+process), time.}, got = {yes}, } Cheers, Andy. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% From: Fethi A Rabhi To: wittmann@informatik.tu-muenchen.de Subject: Classification I have made a classification but for a different purpose. I can send you a paper if you give me your address; you might find useful references. Regards, --------------------------------------------------------------------- Dr Fethi A. Rabhi Email : far@dcs.hull.ac.uk Computer Science University of Hull Tel : +44 (0)482 465744 Hull HU6 7RX (UK) Fax : +44 (0)482 466666 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 25 Oct 93 05:56:00 PDT From: hecht@oregon.cray.com Subject: Delivering Parallel Performance

It seems that people talking about parallel performance often confuse or hide the important low-level details that provide that performance. Performance on parallel programs depends on several key factors regardless of shared memory or distributed memory implementation. These factors are:

1) Memory bandwidth
   a) Memory Latency
   b) Memory Bandwidth/Communication Bandwidth
2) Parallel Coordination
   a) Updates to Global data
   b) Barrier synchronization
3) Program Implementation
   a) Compute Intensity (ratio of ops/word)
   b) Amount of Parallel synchronization

Looking at memory latency we find shared memory latency 10 to 10,000 times LESS to non-local data than distributed or cluster resources. This overhead factor, combined with the easier shared memory programming model, allows one to achieve greater relative performance. Among commercial shared memory systems the Cray APP is only superseded in this measure by the Cray C90.

SHARED MEMORY LATENCY
=====================
Machine         CPUS  CPU type     Year  Measured Latency
--------------- ----  -----------  ----  ----------------
CRAY 1             1  proprietary  1976   188 nanoseconds
VAX 11/780         1  proprietary  1978  1200 nanoseconds
Cyber 205          1  proprietary  1981  1200 nanoseconds
FPS-164            1  proprietary  1981   543 nanoseconds
CRAY X-MP          2  proprietary  1982   171 nanoseconds
FPS-264            1  proprietary  1984   159 nanoseconds
Convex 210         1  proprietary  1987   440 nanoseconds
Convex 240         4  proprietary  198?   440 nanoseconds
CRAY Y-MP          8  proprietary  1988   150 nanoseconds
CRAY S-MP          8  SPARC        1989   900 nanoseconds
CRAY Y-MP/C90     16  proprietary  1991   100 nanoseconds
CRAY APP          84  Intel i860   1991   150 nanoseconds
SGI Challenge     18  Mips R4000   1993   853 nanoseconds

Cray APPs can be and are clustered together via HIPPI. Such a cluster is presented for comparison with other diverse distributed memory systems.

Distributed Memory/Cluster Memory
=================================
                          MEASURED Perf     Peak Perf
                          =============     ===========
Machine           cpus    BW/PE    Lat      BW/PE  Lat     Source
                          MB/s     usec     MB/s   usec
----------------  ----    ------   ------   -----  -----   -------------------
KSR 32pe            32    19       7                       HSpeed Computing Conf
CRAY APP-cluster  1008    92       9        100    0.2     Compcon paper '93
Meiko CS-2           ?    44       25              10      OSU RFP58030026 pub. info
KSR 1088pe        1008    5        25                      HSpeed Computing Conf
Intel Delta        240    7        45       30     0.15    joel@SSD.intel.com '93
RS/6000 IBM V-7      ?    5        140                     express Newsletter '93
Convex HP/Meta-1     ?    12       150                     cicci@hpcnnn.cerh.ch '93
Intel XP/S           ?    14       159                     OSU RFP58030026 pub. info
nCube/2              ?    2        154                     ruehl@iis.ethz.ch '93
IBM SP1 8-64        64    4        220      40     0.5     elam@ibm.com '93
RS/6000 bit-3        ?    10       240                     express Newsletter '93
RS/6000 ethernet     ?    .1       3500                    express Newsletter '93

Definitions
-----------
us   - micro seconds (10^-6 sec)
BW   - Bandwidth (measured in MBytes/sec)
MB/s - MegaBytes/sec

These memory/communication latencies are the bottleneck behind the overheads associated with parallelism (barriers, critical sections, shared updates, etc.), and this directly affects performance on real algorithms and the speedups that can be obtained.

===========================================================================
Pat Hecht   Cray Research Superservers, Inc.   hecht@cray.com
===========================================================================

Other information
-----------------------------------------------------------------------
* HP Meta-1 (7100 chip, FDDI connection), 11.5 MB/s on packets of at least 1kb.
* CRAY APP (i860 based, each CRAY APP has 84 PE, up to 12 systems in a cluster, for up to 1008 processors)
* RS/6000 IBM v-7 switch
* Hspeed Computing Conf = The Conference on High Speed Computing 3/29/93
* OSU RFP58030026 = part of an RFP for a computer system, by Oregon Law this info is part of the public record

APP Background (for those who don't know, ignore if you know)
-------------------------------------------------------------
Up to 84 processors (i860) per APP module
flat shared memory (equal access) via crossbar technology
ANSI HIPPI ports for clustering or networking and VME for I/O subsystem
low parallel overheads in a FORTRAN or C programming environment
Peak rates (6 Gflops 32-bit, 3 Gflops 64-bit)
(it really sustains Gflops on lots of stuff - FFTs, Seismic, radar, image processing, solvers, etc.)

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: coats@cardinal.ncsc.org (Carlie Coats) Subject: Re: The Future of Parallel Computing Organization: North Carolina Supercomputing Center References: <1993Oct14.131912.9126@hubcap.clemson.edu> <1993Oct19.154929.15823@hubcap.clemson.edu> I just read David Bader's missive on directions for parallel computing, and I rather tend to agree. I would go further and say that the efforts I see being made toward standardization are ill-advised -- in particular, the efforts being made toward standards for message-passing programming. IMHO, message passing is at altogether too low a level for good software engineering. In principle, I could use it for the sort of parallel applications I am trying to build. I could build them in Cray Assembly Language, too -- but I don't. We at MCNC are involved with building the first well-engineered generation of air quality models. One of the major problems with previous models was their reliance upon sequential access to sequential files. Consequently, the models tended to become (stated flippantly) conspiracies to manipulate a vastly complex state associated with a number of sequential files, their structures and their current file pointers. Our answer to this has been to build a modeling-specific query API on top of direct access files (the quintessential query being "Interpolate 3D gridded variable V from file F to time T").
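A minimal sketch of what such a query can look like at the calling level (the names, argument list and the linear time interpolation shown are illustrative assumptions, not the actual MCNC interface):

#include <stddef.h>

/* Hypothetical query: interpolate 3D gridded variable `vname` from the
   dataset identified by `fid` to time `t`, writing ncells values to `out`.
   A real implementation would locate the two stored time levels bracketing
   `t` and then apply the per-cell interpolation below. */
int interp3d(int fid, const char *vname, double t, size_t ncells, float *out);

/* The per-cell core: linear interpolation between stored levels v0 (at t0)
   and v1 (at t1), with t0 <= t <= t1. */
void interp_time(const float *v0, const float *v1, size_t ncells,
                 double t0, double t1, double t, float *out)
{
    double frac = (t1 > t0) ? (t - t0) / (t1 - t0) : 0.0;
    for (size_t i = 0; i < ncells; i++)
        out[i] = (float)((1.0 - frac) * v0[i] + frac * v1[i]);
}

For the nested-model setting discussed next, the only change at the calling level is that such a query against another grid's data would block until that grid's program has produced values covering the requested time.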
This sort of file access (together with a conscious encapsulation in the design of the relevant atmospheric process simulation modules) has led to a much simpler, more extensible and maintainable model structure -- at least for single-grid models. It has also exposed an enormous amount of data parallelism for the compilers (we are showing an 80% speedup even on a Cray -- the old RADM model shared grid traversal responsibilities between the main and its science modules; with the new design, we are able to push the horizontal grid traversal into innermost loops; we get vector lengths of 1120 (=35*32), instead of just 35.) We are shortly going to start working with (two-way) nested models, nested to multiple levels eventually. Nest structure will tend to be volatile, changing from application to application. Consequently, I am reluctant to use the monolithic-program approach used by current nested models. I would prefer to build a nested model from multiple *separate* but *cooperating* programs, each of which encapsulates the simulation -- and the data parallelism -- associated with its own grid. Furthermore, I don't want to *have* to phrase the cooperation in terms of message passing. That would just get us back into the shared-state quagmire we have just spent so much work climbing out of. I would much rather phrase the communications in terms compatible with what we are using for file access, with the added proviso that queries *block* until the data to satisfy them become available. Thanks, Carlie J. Coats, Jr., Ph.D. phone (919)248-9241 MCNC Environmental Programs fax (919)248-9245 3021 Cornwallis Rd. coats@mcnc.org RTP, NC 27709-2889 xcc@epavax.rtpnc.epa.gov "My opinions are my own, and I've got *lots* of them!" Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Douglas Hanley Subject: Paralation Model of Parallel Computation Organization: Department of Computer Science, University of Edinburgh This query concerns itself with Gary Sabot's Paralation Model of Parallel Computation. Currently I am attempting to implement the Match and Move operators of this programming model across a MIMD architecture in order to assess its performance characteristics (to date, I think it has only been implemented on SIMD and vector parallel platforms). I intend to use C as the base language to be extended with Match & Move. I know that a language called Paralation C was being experimented with in 1988 and I would like to hear from anybody who knows where I can get further information on its current state or has personal experience of it. In addition I would be grateful to hear from anybody who has any experience or information regarding attempts to implement this programming model in any base language on MIMD platforms. Thanks in advance for any help...
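For readers unfamiliar with the operators being discussed, the following deliberately simplified, sequential C fragment conveys only the flavour of match and move on flat arrays; it is not Sabot's API, and the integer keys, the summing combiner and the quadratic search are all simplifying assumptions.

#include <stddef.h>

/* "Match" pairs up source and destination sites whose keys are equal;
   "move" transfers source values along that mapping, combining values
   that collide at the same destination (here: by summation). */
void match_and_move_sum(const int *dst_key, double *dst_val, size_t ndst,
                        const int *src_key, const double *src_val, size_t nsrc)
{
    for (size_t d = 0; d < ndst; d++) {
        dst_val[d] = 0.0;                      /* identity of the combiner  */
        for (size_t s = 0; s < nsrc; s++)
            if (src_key[s] == dst_key[d])      /* the match: key equality   */
                dst_val[d] += src_val[s];      /* the move, with combining  */
    }
}

On a MIMD machine the destination sites would be distributed across processors and the matched source values forwarded to their owning processors before combining, which is where the interesting implementation questions raised in the query above arise.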
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel To: comp-parallel@news.Germany.EU.net Path: teutonia!hohndel From: hohndel@informatik.uni-wuerzburg.de (Dirk Hohndel) Newsgroups: comp.parallel Subject: Re: terminology question Date: 25 Oct 1993 14:47:14 GMT Organization: University of Wuerzburg, Germany References: <1993Oct20.121916.28264@hubcap.clemson.edu> Nntp-Posting-Host: winx13 X-Newsreader: TIN [version 1.2 PL2] Greg Wilson (EXP 31 dec 93) (Greg.Wilson@cs.anu.edu.au) wrote: : I have always used the term "star" to refer to a topology in which every processor : is connected to every other; however, I am told that the term is also used for : topologies in which processors 1..N are connected to a distinguished central : processor 0. Assuming that the latter definition is more common, is there a term : for topologies of the former type? what you mean is the Crossbars topology. Dirk -- _ _ _ _ _ | Lehrstuhl Informatik I | | | |_) |/ |_| | | |_| |\ | | | |_ | | Universitaet Wuerzburg |_/ | | \ |\ | | |_| | | | \| |_/ |_ |_ | Am Hubland, D-97074 Wuerzburg Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: O.Naim@ecs.soton.ac.uk (Oscar Naim) Subject: Re: Info on PABLO wanted Organization: Electronics and Computer Science, University of Southampton References: <1993Oct20.121909.28171@hubcap.clemson.edu> In <1993Oct20.121909.28171@hubcap.clemson.edu> jgarcia@cse.ucsc.edu (Jorge Garcia) writes: >I'm looking for any information available on the PABLO system, from >the University of Illinois (I think). Does anyone know where I can >find technical articles about it, documentation, or any other source >of information? Please reply directly to me: jgarcia@cse.ucsc.edu >Thanks in advance, >Jorge Hi, I hope this information will help you. Cheers, Oscar Naim. >From pablo@edu.uiuc.cs.cymbal Tue Sep 14 15:49:57 1993 Received: from [080039002d26/DCC.38826110000503030002020103] by bright.ecs.soton.ac.uk; Tue, 14 Sep 93 15:38:22 BST Received: from a.cs.uiuc.edu by sun3.nsfnet-relay.ac.uk with Internet SMTP id ; Tue, 14 Sep 1993 15:45:47 +0100 Received: from cymbal.cs.uiuc.edu by a.cs.uiuc.edu with SMTP id AA10093 (5.64+/IDA-1.3.4 for O.Naim@ecs.soton.ac.uk); Tue, 14 Sep 93 09:45:40 -0500 Received: by cymbal.cs.uiuc.edu (4.1/SMI-4.1) id AA20182; Tue, 14 Sep 93 09:45:39 CDT Date: Tue, 14 Sep 93 09:45:39 CDT From: pablo@edu.uiuc.cs.cymbal (Pablo Account) Message-Id: <9309141445.AA20182@cymbal.cs.uiuc.edu> To: O.Naim Subject: Pablo Release 2.0 now available Sender: pablo@edu.uiuc.cs.cymbal Content-Length: 11621 X-Lines: 266 Status: OR This mail is to announce the availability of version 2.0 of the Pablo Performance Analysis Environment. If you wish to be removed from the Pablo mailing list or receive multiple copies of this announcement, please let us know by sending email to pablo@guitar.cs.uiuc.edu. The Pablo environment presently includes: + Motif-based interface for the specification of source code instrumentation points (both trace and count data). + C parser that can generate instrumented application source code. + Performance data trace capture library for single processor Unix systems, for the Intel iPSC/2 and iPSC/860 hypercubes, and for the Thinking Machines Corporation CM-5 using CMMD (version 3.0). + Flexible self-documenting data metaformat and associated tools that can be used to describe and process diverse types of data. 
+ Graphical performance analysis environment that runs on Sun SparcStations, DecStations running Ultrix, and the Intel Paragon. The environment is based on the graphical configuration of directed data analysis graphs, that can be used to analyze and display dynamic performance data. + Standalone sonification system, called Porsonify, that can be used to map data to sound using MIDI synthesizers and sampled sounds on the Sun SparcStation. + Set of graphical display widgets for the X window system. These include bargraphs, dials, scatterplots, kiviat diagrams, and contour plots. This distribution is directed at academic and government research sites. You may freely retrieve, use, and modify the Pablo software as long as it is not for commercial gain. IF YOU ARE A COMMERCIAL ORGANIZATION AND WISH TO USE THE PABLO SOFTWARE INTERNALLY OR YOU WISH TO INCORPORATE THE PABLO SOFTWARE INTO A COMMERCIAL PRODUCT, YOU MUST COMPLETE A SOFTWARE LICENSE. Commercial evaluation licenses are available at minimal or no charge. As a precursor to licensing, Commercial organizations may retrieve copies of the Pablo documentation (BUT NOT THE SOFTWARE) without charge. Documentation and software are available via ftp on the system named bugle.cs.uiuc.edu (128.174.237.148) The ftp server enforces some access restrictions to minimize the impact on our systems. File retrieval constitutes consent to the terms of the license, and the ftp server maintains a log of files transferred. To retrieve the Pablo software and documentation, connect to ftp server in the normal way, log in as anonymous, and specify your electronic mail address as the password. The Pablo software and documentation are located in the pub directory. In that directory, you will find the following: Pablo.license the terms of the Pablo software license README introduction to license agreement Release-2.0 subdirectory containing the current release All items are compressed (.Z) files. Source code is in tar format; documents are in Postscript. All directories contain README files with additional details. YOU ARE STRONGLY ENCOURAGED TO RETRIEVE AND READ THE DOCUMENTS PRIOR TO RETRIEVING THE CODE. Pablo is a large, complex system --- you should be sure that you have the requisite tools and disk space before retrieving and installing the software. Thanks for your interest in the Pablo Analysis Environment! **************************************************************************** Excerpts from the pub/Release-2.0/README file: **************************************************************************** Changes since Release 1.1: Environment: -- Source has been modified to build with a more recent version of the g++ compiler (2.4.5). -- Moved to X11R5 patch level 25 and Motif release 1.2.2. -- Built on DecStation running ULTRIX V4.2. (Not including Audio) -- Built on SparcStation using Cross-compiler tools for Intel Paragon. (Not including Audio), producing an executable for the Intel Paragon. SDDF library and Utility Programs: -- Modified SDDF format to include a header for each file. This header allows verification that a file is in SDDF format. It also supports on-the-fly reading and writing of binary files whose byte order is different than that of the machine where the code is running. Information on converting files and saved configurations to the new SDDF format is included in the release. -- Added new methods to support seeking on SDDF files. 
This is of interest only if you are heavily involved in creating and manipulating SDDF files with your own programs. It is transparent to the casual user. -- Added a new utility program "FileStats" to accumulate and report statistics for SDDF files. Provides useful guidelines for configuring module parameters in the Pablo Visualization system. -- Added greater precision to the ASCII SDDF format for types float and double. Pablo Visualization: -- Expanded error checking in the user interface. -- Added display to FileInput modules showing the percent of the file processed thus far. Can be disabled by setting the Pablo app-defaults resource Pablo.displayBytesProcessed to FALSE, or interactively using the Configure Defaults dialog. -- Modules with no output pipes can be removed from the execution graph. This permits graph editing. -- Added option to allow reconfiguration of Module Parameters without showing pipe/record/field binding information. -- An execution graph can be deleted and another entered without exiting the Pablo environment. -- New analysis and display modules (Performance Utilization; Clustering) -- Added informal documents on dealing with Black and White displays and writing new modules. Pablo Instrumentation: -- Changed all timing measurements (including timestamps) from 32-bit integers to 64-bit integers to prevent clock wraparound in long-running applications on machines with fast clocks (e.g. the Intel iPSC/860 user-level, which runs at 10 MHz). The Timestamp field in trace files is no longer a 32-bit scalar, but a vector of two 32-bit integers. -- Added redundant floating-point timestamps and other time measurements to all trace records. These double precision scalar values are, unlike the above-mentioned Timestamp fields, independent of the system clock rate and report time in consistent units of Seconds (the name of the field). These fields were added to make timing analyses easier, although at the cost of a possible loss of precision. -- Implemented elementary clock synchronization on Intel hypercubes to counter clock drift. -- Ported the instrumentation library to the Thinking Machines Corporation CM-5 using CMMD (version 3.0). See the instrumentation environment user's guide for limitations on this port in the current release. -- The instrumentation library produces trace files in the new SDDF format, compatible with the Pablo visualization software and tools contained in this release. The SDDFMerge program uses the new Seconds field as the default merge key field, although the user may choose any scalar-valued field that appears in every record, monotonically nondecreasing through each file to be merged. -- The message-passing extension to the trace library permits instrumentation of global message-passing operations, in addition to message send and receive operations. The versions are system-dependent, e.g. the CM-5 has CMMD global operations that do not directly correspond with Intel hypercube global operations. In blocking receive trace records, a field has been added indicating the source node for the message. -- Programs instrumented with the Pablo instrumentation parser and user interface (iPablo) now feature a run-time modifiable instrumentation activation/deactivation flag. Without recompiling, tracing may be turned on or off dynamically, either under software control or more directly by the user through a debugger. There is also an invocation option to select the initial trace flag setting. 
-- Procedure and loop trace records produced by a program instrumented by iPablo include fields indicating the source code location of the instrumented program construct. This data will be used in a future release to more fully integrate performance data with the task of instrumenting application software. ----------------------------------------------------------------------------- Supported platforms and disk space requirments: The Pablo instrumentation library was developed and tested on Intel iPSC/2 and iPSC/860 systems and on the Thinking Machines CM-5 using CMMD 3.0. There is also a portable version which may be used to instrument serial programs on BSD UNIX machines. The instrumenting parser and GUI (iPablo) are supported on the same machines as the Pablo visualization software. The Pablo visualization and sound environment in this distribution was developed and tested on the following hardware/software platforms: + SparcStation 10-GX SunOS 4.1.3 gnumake 3.62 g++/gcc 2.4.5 -or- ATT Cfront 3.0.1 X11R5 patch level 25 Motif 1.2.2 perl 4.0 WCL 1.06 (from X11R5 contrib/lib/Wcl) WCL is only used by the Sound portions of the system. If you choose not to build those, you will not need WCL. The Pablo visualization system without the sound environment has also been built and tested on the following hardware/software platforms: + DecStation 5000/200 Ultrix 4.2 gnumake 3.58 g++/gcc 2.4.5 X11R5 patch level 25 Motif 1.2.2 perl 3.0 + Intel Paragon: OSF/1 Release 1.0.1 gnumake 3.62 CC Release 3.0 cc Paragon Rel 4.1.2 X11R5 patch level 25 Motif 1.2.2 * This was compiled on a SparcStation using the cross-development tools provided by Intel. A few minor changes to include files were necessary to do the build. These changes will be incorporated into Intel's next release. Total disk space required for the Pablo source and binaries should not exceed 55Mb on a SparcStation with dynamic libraries. More will be required for systems that do not support dynamic libraries. Approximate space requirements (in Mb) for individual components of the distribution are: Sample Trace Data Files & Executables 8Mb Sound System Sample Libraries 16 Sound System Source & Binaries 8 Visualization Source & Binaries, (including widgets and motif wrapper classes) 16 Instrumentation Environment - Source & Binaries 5 ------------------------------------------------------------------------------ Subdirectories: DataFiles - directory structure containing data files. See DataFiles/README for more details SDDFlibrary - directory structure containging source for standalone SDDF library and sample programs that use the library. See SDDFlibrary/README for more details. Instrument - the Pablo Instrumentation system. See Instrument/README for more details. Visual - the Pablo Visualization system. See Visual/README for further information on the organization of the source tree for visualization system. ---------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: blacey@cerf.net (Bruce B. Lacey) Newsgroups: comp.parallel,comp.protocols.tcp-ip,comp.unix.programmer,comp.unix.questions Subject: Availability of a Distributed Computing Environment API? We are in the process of developing an image processing workbench that will have services distributed amongst multiple UNIX machines. 
To support this distributed computing environment (DCE) paradigm (multiple clients, multiple servers), we need a priority based message passing mechanism that operates using UNIX sockets. While we can develop our own, I am sure that we are not alone in this need. I was wondering if any fellow netters know of a commercial or preferably shareware package that provides this sort of message passing mechanism over UNIX sockets. Please e-mail responses to blacey@cerf.net Thanks in advance, Bruce B. Lacey +====================================================================+ | Bruce B. Lacey Technical Group Manager of Systems Engnineering | | XonTech, Inc. blacey@cerf.net | | Van Nuys, CA 91406 (818) 787-7380 ext. 295 | +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+ | Specializing in massively parallel computing architectures. | +====================================================================+ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: lorie@antigone.cs.rice.edu (Lorie Liebrock) Subject: composite or multi-grid problems Organization: Rice University I am working on algorithms for automating the distribution of ICRM (Irregularly Coupled Regular Mesh) problems across parallel processors in Fortran D. Other names for ICRM problems are composite grid and multi-block. ICRM problems typically involve the simulation of material dynamics in or around complex topology bodies. For example, one of the areas I am interested in is aerodynamics simulations where each mesh represents some component (e.g., body, wing, pylon, etc.) and the couplings represent the seams or connections between the parts. In the aerodynamics case the flow of air over the components may be the phenomenon of interest. I am interested in any and all applications with such connected components. My work is also intended to support ICRM problems for which the grids have been generated automatically. I am looking for a few test problems that I can use in validation of my algorithms. I am also looking for researchers with ICRM problems that would be willing to discuss their applications and programs so that we can continue improving support for these problems in Fortran D. Please send me any comments and let me know if you or someone you know has applications in this class. Lorie M. Liebrock lorie@cs.rice.edu Ph.D. Candidate Computer Science Rice University Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wire@SSD.intel.com (Wire Moore) Subject: Re: Information wanted on Warp and iWarp Reply-To: wire@SSD.intel.com (Wire Moore) Organization: Intel Supercomputer Systems, Beaverton, Oregon, USA References: <93-10-031@comp.compilers> <1993Oct15.121107.29626@hubcap.clemson.edu> RE: Information on Warp and iWarp: CMU's iWarp papers are available by anonymous FTP on puffin.warp.cs.cmu.edu in the directory iwarp-papers. Contact Thomas Stricker at CMU (Thomas.Stricker@cs.cmu.edu) for more information on Warp. I have access to a small number of architecture specifications (the architecture handbook) for the iWarp component that can be made available by request to me. 
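Returning to the request above for priority-based message passing over UNIX sockets: no particular package is implied here, but the core mechanism is small. This sketch (the framing format, the priority ordering and all names are assumptions) tags each message with a priority header on the wire and keeps the receiving side in a priority-ordered queue; error handling is trimmed for brevity.

#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>

struct msg {
    uint32_t priority;          /* higher value = more urgent (assumption) */
    uint32_t length;            /* payload length in bytes */
    char    *payload;
    struct msg *next;
};

/* Send one message: a fixed 8-byte header (priority, length), then payload. */
int msg_send(int fd, uint32_t priority, const void *buf, uint32_t len)
{
    uint32_t hdr[2] = { htonl(priority), htonl(len) };
    if (write(fd, hdr, sizeof hdr) != (ssize_t)sizeof hdr) return -1;
    return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
}

/* Read exactly n bytes from a stream socket. */
static int read_full(int fd, void *buf, size_t n)
{
    char *p = buf;
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0) return -1;
        p += r; n -= (size_t)r;
    }
    return 0;
}

/* Receive one message and insert it into a priority-ordered list, so that
   the consumer always dequeues the most urgent message first. */
int msg_recv_enqueue(int fd, struct msg **queue)
{
    uint32_t hdr[2];
    if (read_full(fd, hdr, sizeof hdr) != 0) return -1;

    struct msg *m = malloc(sizeof *m);
    m->priority = ntohl(hdr[0]);
    m->length   = ntohl(hdr[1]);
    m->payload  = malloc(m->length);
    if (read_full(fd, m->payload, m->length) != 0) return -1;

    struct msg **pp = queue;            /* insertion sort by priority */
    while (*pp && (*pp)->priority >= m->priority)
        pp = &(*pp)->next;
    m->next = *pp;
    *pp = m;
    return 0;
}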
------------------------------------------------------------------------------- Wire Moore phone: (503) 629-6333 Intel Supercomputers fax: (503) 629-6367 15201 NW Greenbriar Parkway email: wire@ssd.intel.com Beaverton, OR 97006 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: conte%oprah.ece.scarolina.edu@usceast.cs.scarolina.edu (Tom Conte) Subject: Position Announcement: NCR Endowed Chair Organization: ECE Department, University of South Carolina UNIVERSITY OF SOUTH CAROLINA NCR Endowed Chair in Computer Engineering The University of South Carolina Department of Electrical and Computer Engineering seeks applications for the NCR Endowed Chair position in Computer Engineering at the rank of Professor. Applicants must have demonstrated research success in computer architecture, software systems or a related field. The selected individual will be expected to work closely with the department's faculty and NCR (an AT&T Company). The relationship between NCR and the department is strong, with NCR supporting several research efforts. The ECE Department is housed in the $24 million Swearingen Engineering Center, which contains laboratories for mobile robotics, computational electronics, millimeter-wave integrated circuits, VLSI CAD, artificial intelligence, computer architecture, parallel processing, software engineering, high voltage, pulsed power, and laser systems. The department offers the B.S., M.E., M.S., and Ph.D. degrees. Interested individuals should submit a resume, a publication bibliography, names of five references, and three selected publications by March 1, 1994 to: NCR Chair Search Committee Department of Electrical and Computer Engineering University of South Carolina Columbia, SC 29208 The University of South Carolina is an equal opportunity and affirmative action employer. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 25 Oct 1993 14:31:56 -0500 From: David Kanecki Subject: Simulaton for Emergency Management (repost) As chairperson of the Emergency Management Committee for The Socie- ty for Computer Simulation, I would like to ask for your help in inform- ing others about the Emergency Management Conference and to submit a paper or abstract to the conference. Papers and abstracts can be submit- ted until December 20th. To submit a paper or abstract, please sent it to: Simulation for Emergency Management c/o SMC '94 P.O. Box 17900 San Diego, CA 92177 Phone: (619)-277-3888, Fax: (619)-277-3930 Currently, we have received four papers from colleagues in the industrial, government, and academic areas. Also, if you would like to volunteer, please contact me, kanecki@cs.uwp.edu, or SCS in San Diego. Other conferences that are being held during the 1994 SCS Simula- tion Multiconference, SMC '94, April 11-15, 1994, Hyatt Regency Aventine - La Jolla, San Diego, California are Simulators International; High Performance Computing Symposium; Visualization, Validation, & Verifica- tion of Computer Simulations; Mission Earth; Military, Government, and Aerospace Simulation; and 27th Annual Simulation Symposium. Prior conferences on Emergency Management were held in 1983, 1985, 1987, 1989, 1991, 1992, and 1993. All of these conferences were sponsored by SCS. To show the diversity of topics and interest of Emergency Manage- ment, I have compiled a list of authors and titles of papers from the 1993 and 1992 conferences: 1993 Conference 1. 
"A Report Card on the Effectiveness of Emergency Management and Engineering", James D. Sullivan, CCP, CDP, CSP. 2. "A New Cooperative Program for the Development of Advanced Technology for Emergency Preparedness", Robert J. Crowley, P.E. 3. "Simulation in Training and Exercises for Emergency Response", Lois Clack McCoy. 4. "Fatal Hazardous Materials and Accident Statistics", Theodore S. Glickman, Dominic Golding, Karen S. Terry, Frederick W. Talcott. 5. "A Risk Analytic Approach to Contingency Planning using Expert Judgement in Simulated Scenarios", John R. Harrald, Thomas Mazzuchi. 6. "Emergency Response and Operational Risk Management", Giampiero E.G. Beroggi, William A. Wallace. 7. "Damage Analysis of Water Distribution Systems using GIS", Matthew J. Cassaro, Sridhar Kamojjala, N.R. Bhaskar, R.K. Ragade, M.A. Cassaro. 8. "Physical Damage and Human Loss: Simulation of the Economic Impact of Earthquake Mitigation Measures", Frederick Krimgold, Jayant Khadilkar, Robert Kilcup. 9. "Utah Equip: A Comprehensive Earthquake Loss Prediction Model for the Wasatch Fault", Robert Wilson, Christopher Rojahn, Dr. Roger Scholl, Barbara Skiffington, Terry Cocozza. 10. "Geographic Information System (GIS) Application in Emergency Management", Donald E. Newsom, Ph.D., P.E., Jacques E. Mitrani. 11. "Smart Spatial Information Systems and Disaster Management: GIS in the Space Age", A.M.G. Jarman, Ph.D. 12. "An Evacuation Simulation for Underground Mining", Richard L. Unger, Audrey F. Glowacki, Robert R. Stein. 13. "Importance of Rems in the Aftermath of Hurricane Andrew", Suleyman Tufekci, Sandesh J. Jagdev, Abdulatef Albirsairi. 14. "Optimal Routing in State Dependent Evacuation Networks", David L. Bakuli, J. MacGregor Smith. 15. "Evacuation Models and Objectives", Gunnar G. Lovas, Jo Wik- lund, K. Harrald Drager. 16. "Xpent, Slope, Stability Expert System for Managing the Risk", R.M. Faure, Ph.D., D. Mascarelli, Ph.D. 17. "Mapping of Forest Units which have a Protective Function against Natural Hazards. An Application of Geographical Information Systems in France", Frederic Berger. 18. "Artificial Intelligence and Local Avalanche Forecasting: The System 'AVALOG' ", Robert Belognesi. 19. "Simulations in Debris Flow", Fabrice Moutte. 20. "Planning and Controlling of General Repair in a Nuclear Power Plant", Majdandzic N. and Dobrila Damjonovic-Zivic. 21. "Spatial Decision Support Systems for Emergency Planning: An Operational Research/ Geographical Information Systems Approach to Evacuation Planning", F. Nishakumari de Silva, Michael Pidd, Roger Eglese. 22. "Online Expert Systems for Monitoring Nuclear Power Plant Accidents", M. Parker, F. Niziolek, J. Brittin. 23. "Nuclear Power Reactor Accident Monitoring", M. Parker, P.E. 24. "An Expert System for Monitoring the Zion Nuclear Power Station the DNS Early Warning Program", Joseph L. Brittin, Frank Niziolek. 25. "Fire Spread Computer Simulation of Urban Conflagagrations", P. Bryant, G.R. Doenges, W.B. Samuels, S.B. Martin, A.B. Willoughby. 26. "Practical Applications of Virtual Reality to Firefighter Training", Randall Egsegian, Ken Pittman, Ken Farmer, Rick Zobel. 27. "Difficulties in the Simulation of Wildfires", James H. Brad- ley, A. Ben Clymer. 28. " 'Snow and Computer' A Survey of Applications for Snow Hazards Protection in France", Laurent Buisson, Gilles Borrel. 29. "Mem-brain, Decision Support Integration-Platform for Major Emergency Management (MEM)", Yaron Shavit. 30. 
"Architecture of a Decision Support System for Forest Fire Prevention and Fighting", Jean-Luc Wybo, Erick Meunier. 31. "Optimizing Comprehensive Emergency Mitigation and Response through the use of Automation (Panel Discussion)", Lois Clark McCoy, David McMillion. 32. "Applying a Geographical Information System to Disaster Epide- miologic Research: Hurricane Andrew, Florida 1992", Josephine Malilay, Lynn Quenemoen. 33. "An Effective Method of Extracting a Localized Storm History from a Database of Tracks", Eric C. Dutton, Ronald S. Reagan. 34. "Periodic Poisson Process for Hurricane Disaster Contingency Planning", Ronald S. Reagan. 35. "Estimation, Optimization, and Control in Rural Emergency Medical Service (EMS) Systems", Cecil D. Burge, Ph.D., P.E. 36. "Visions for a Networked System of Emergency Vehicle Training Simulators", Gregory J. Bookout. 37. "The FEMA ROCS Model", Henry S. Liers, Ph.D. 38. "Computers bring Crisis Simulations to Life using Computer Maps, Graphics, and Databases to Re-Define and Maximize the Effective- ness of "Tabletop" Crisis Simulations", James W. Morentz, Ph.D., Lois Clark McCoy, Joseph Appelbaum, David Griffith. 39. "The Use of Computer Simulations for Consequence Analysis of Toxic Chemical Releases", E.D. Chikhliwala, M. Oliver, S. Kothandarman. 40. "Didactic Simulation (Syndicate Exercise) for Disaster Manage- ment", Dr. S. Ramini. 41. "Its Just one Damn' Crisis After Another...", S.F. Blinkhorn, M.A., Ph.D. 42. "AEDR, American Engineers for Disaster Relief Database, Spread- sheet and Wordprocessing Applications", James S. Cohen. 43. "Geophysical Uncertainties Affecting Emergencies", James H. Bradley, Ph.D. 44. "Consultation on Simulation for Emergency Preparedness (COSEP) User/Developer Roundtable Discussion Session", Lois Clark McCoy, Donald E. Newsom, Jacques Mitrani. 1992 Proceedings 1. "Are We Winning the War Against Emergencies", James D. Sullivan, CCP, CDP, CSP. 2. "Simulation of Suburban Area Fires", A. Ben Clymer. 3. "Expertgraph: Knowledge Based Analysis and Real Time Monitoring of Spatial Data Application to Forest Fire Prevention in French Riviera", Jean Luc Wybo. 4. "Simulation in Support of The Chemical Stockpile Emergency Preparedness Program (CSEPP)", Robert T. Jaske, P.E., Madhu Beriwal. 5. "Modeling Protective Action Decisions for Chemical Weapons Accidents", John H. Sorensen, George O. Rogers, Michael J. Meador. 6. "Simulation Meets Reality - Chemical Hazard Models in Real World Use", Donald E. Newsom, Ph.D., P.E. 7. "Managing the Risk of a Large Marine Chemical Spill", Marc B. Wilson, John R. Harrald. 8. "Simclone - A Simulated Cyclone - Some User Experiences and Problems", Dr. S. Ramani. 9. "Simulation of Coastal Flooding Caused by Hurricanes and Winter Storms", Y.J. Tsai. 10. "Simulation of Environmental Hazards in a Geographic Informa- tion System: A Transboundary Urban Example from the Texas/Mexico Border- lands", Thomas M. Woodfin. 11. "Simulation and Protection in Avalanche Control", Laurent Buisson. 12. "Natural Disasters, Space Technology, and the Development of Expert Systems: Some Recent Developments in Australian National River Basin Planning and Management", A.M.G. Jarman, Ph.D. 13. "Characterizing Mine Emergency Response Skills: A Team Approach to Knowledge Acquisition", Launa Mallet, Charles Vaught. 14. "Using Simulation to Prepare for Emergency Mine Fire Evacua- tion", Audrey F. Glowacki. 15. "Dymod: Towards Real Time, Dynamic Traffic Routing During Mass Evacuations", Frank Southworth, Bruce N. Janson, Mohan M. Venigalla. 
16. "A Tutorial on Modeling Emergency Evacuation", Thomas Kisko, Suleyman Tufekci. 17. "REMS: A Regional Evacuation Decision Support System", Thomas Kisko, Suleyman Tufekci. 18. "EVACSIM: A Comprehensive Evacuation Simulation Tool", K. Harrald Drager, Gunnar Lovas, Jo Wiklind, Helge Soma, Duc Duong, Anne Violas, Veronique Laneres. 19. "The Prediction of Time-Dependent Population Distributions", George Banz. 20. "Mulit-Objective Routing in Stochastic Evacuation Networks", J. MacGregor Smith. 21. "Earthquake Impact Projections Expert System Application", Barbara Skiffington, Robert Wilson. 22. "An Economic Profile of a Regional Economy Based on an Implan Derived Database", E. Lawrence Salkin. 23. "Energy Corrected Simulation Accelerograms for Non-Linear Structures", Darush Davani, Michael P. Gaus. 24. "Stimulating the Planning Progress Through Computer Simulation", Salvatore Belardo, John R. Harald. 25. "Simulation Methods in Utility Level Nuclear Power Plant Emer- gency Exercises", Klaus Sjoblom. 26. "Computer Simulation of Industrial Base Capacity to Meet Na- tional Security Requirements", Mile Austin. 27. "The FEMA Emergency Management Assessment System", Robert Wilson. 28. "The Missing Data Base: Under-Automation in Disaster Response & Planning", Lois Clark McCoy. 29. "The Environmental Education: Need for All", M. Abdul Majeed. === Call For Papers === SIMULATION FOR EMERGENCY MANAGEMENT Sponsored by The Society for Computer Simulation, SCS April 11-15, 1994 La Jolla - California Part of the SCS 1994 Simulation Multiconference A special topic area of the SMC '94, sponsored by the Emergency Manage- ment Engineering Technical Activity Committee of the SCS brings users, planners, researchers, managers, technicians, response personnel, and other interested parties to learn, teach, present, share, and exchange ideas and information about how, when, where, and why computer simula- tion and related tools can be used to avoid, mitigate, and recover from disasters and other emergencies. Topics Natural Disasters Hurricanes and Tornadoes, Floods, Earthquakes, Volcanic Activity, Outdoor fires, snow and debris avalanches. Man-made Disasters Oil, Chemical and Nuclear spills, Nuclear and Chemical plant acci- dents, building fires, Communication systems failures, Utility failures. Techniques Training and Simulators, AI and Expert systems, Global information systems, Risk Analysis, Operations Research, Simulation, Effectiveness analysis, Cost and Damage analysis. Specific Applications Evacuation, Research on Emergency Management or Engineering, Emer- gency Control Search and Rescue. Presentations, demonstrations and exhibits concerning any and all areas of simulation and modeling (as well as related technologies) including safety, emergency management and planning, forensic technology, design, response, user experience and problems and case studies are appropriate to be presented. Papers or abstracts can be submitted until late December to: Simulation for Emergency Management c/o SMC '94 P.O. Box 17900 San Diego, CA 92177 Phone (619)-277-3888, Fax (619)-277-3930 Other Conferences and activities being held as part of SMC '94 Simulators International, High Performance Computing Symposium, Visualization, Validation & Verification of Computer Simulation, Mission Earth, Military, Government, and Aerospace Simulation, 27th Annual Simulation Symposium, Professional Development Seminars, and Exhibits. 
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: eugene@nas.nasa.gov (Eugene N. Miya) Subject: Re: The Future of Parallel Computing Summary: well I wonder if this will post?..... Organization: NAS - NASA Ames Research Center, Moffett Field, CA References: <1993Oct14.131912.9126@hubcap.clemson.edu> <1993Oct19.154929.15823@hubcap.clemson.edu> >In article <1993Oct14.131912.9126@hubcap.clemson.edu> Peter Su writes: >>Or, do I, for the purposes of benchmarking, have to regard the >>vendor's brain-dead compiler as part of the system. Aren't we trying >>to figure out how good the *hardware* is, not the hardware+compiler? In article <1993Oct19.154929.15823@hubcap.clemson.edu> dbader@eng.umd.edu (David Bader) writes: > This brings up an interested question about the direction that the >field of parallel computing is headed towards. > > That is, is the hardware growing too fast and neglecting the >software and algorithmic concerns? The history of hardware has always exceeded the software. Read one way, an implication would be to slow down hardware development. You would not want that now would you? I have a hard time seeing how we are going to catch up algorithmically. Have we many better algorithms than say Euclid's algorithm? It's not like we can undertake National initiatives to make some of this stuff go faster. The problem with computer architecture is that it is still in its infancy. We have barely explored the multiprocessor design space. We are far from any sort of "standard" architecture. We have specialized architectures proposed by people who have more ways to connect processors than there exist applications for them. >Let's face >it, after we create a large application, we do not want to have to >rewrite it every time we get the "latest" machine, or even the next >generation of our current series of machine. > > Is the same true for the emergence of parallel computing? In my >opinion, no. We have not ground out a "standard" representation for >the model of parallel computing. We have not put enough effort into >the theory of parallel algorithmics. Throwing faster hardware at us >will not solve the problem. Even if the benchmark time for a given >application is cut in half, what happens as we try to increase the >problem size by a factor of K ? The compiler then must have the task >of decomposing the algorithm onto the underlying hardware. It is just >wrong to require the programmer to have a detailed knowledge of the >hardware, data layout, and compiler tricks just to get anywhere near >"benchmarked" performance rates. Mixed blessing; nice generality. Should we throw slower hardware? > We are now in an age when the high performance machines have >various data network topologies, i.e. meshes, torii, linear arrays, >vector processors, hypercubes, fat-trees, switching networks, etc.. >etc.. These parallel machines might all have sexy architectures, but >we are headed in the wrong direction if we don't take a step back and >look at the future of our work. We shouldn't have to rewrite our >algorithms from scratch each time our vendor sells us the latest >hardware with amazing benchmarks. Benchmarks should also be >attainable from STANDARD compiler options. We should NOT have to >streamline routines in assembly language, give data layout directives, >nor understand the complexities of the hardware and/or data network. 
> > Please let me know what you think, I don't believe that many use assembly language these days, but we have message-passing equivalents. We will completely stop streamlining with assembly language just before we stop using BibTeX or refer for bibliographic citations. --eugene miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov Resident Cynic, Rock of Ages Home for Retired Hackers {uunet,mailrus,other gateways}!ames!eugene Second Favorite email message: 550 Host unknown (Authoritative answer from name server): Address family not supported by protocol family A Ref: Mathematics and Plausible Reasoning, vol. 1, G. Polya
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: saevans@icaen.uiowa.edu (Scott A Evans) Subject: routing benchmarks wanted Sender: news@nexus.uiowa.edu (News) Date: Mon, 25 Oct 1993 20:11:01 GMT Nntp-Posting-Host: l_cae13.icaen.uiowa.edu Organization: Iowa Computer Aided Engineering Network, University of Iowa Apparently-To: comp-parallel@uunet.uu.net I'm looking for pointers to benchmarks for VLSI routing schemes. Any pointers or help would be appreciated. Thanks! Scott -- Scott A Evans /0/ __O/ (saevans@icaen.uiowa.edu) |TTTTTTTTTTTTTT\\T| Grad Student For Life |IIIIIIIIIIIIIII\\| Sparks and Magic forever! | / \ / \
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: urban@CS.ColoState.EDU (david urban) Subject: Re: terminology question Sender: news@yuma.ACNS.ColoState.EDU (News Account) Message-ID: Date: Mon, 25 Oct 1993 21:36:41 GMT References: <1993Oct20.121916.28264@hubcap.clemson.edu> Nntp-Posting-Host: beethoven.cs.colostate.edu Organization: Colorado State University, Computer Science Department In article <1993Oct20.121916.28264@hubcap.clemson.edu> Greg.Wilson@cs.anu.edu.au (Greg Wilson (EXP 31 dec 93)) writes: >I have always used the term "star" to refer to a topology in which every processor >is connected to every other; however, I am told that the term is also used for >topologies in which processors 1..N are connected to a distinguished central >processor 0. Assuming that the latter definition is more common, is there a term >for topologies of the former type? In the parallel text we use, they refer to it as a fully connected network. David S. Urban -- David S. Urban email : urban@cs.colostate.edu To be the person, you must know the person. To know the person, you must understand the person. To understand the person, you must listen. To listen, you must open your mind and put aside all preconceived ideas and notions.
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ielavkow@techunix.technion.ac.il (Levkovitz Roni) Subject: Re: Ordering in sparse Cholesky factorization Organization: Technion, Israel Institute of Technology References: <1993Oct19.154910.15684@hubcap.clemson.edu> bharat kumar (kumar-b@cis.ohio-state.edu) wrote: : I'm looking for papers on ordering of symmetric positive definite matrices to : minimize fill-in and maximize parallelism, and the mapping of computation to : processors. Try looking at papers by George, Liu, Ashcraft, Heath, Ng, etc. For example: Alan George, Michael Heath, Joseph Liu, Esmond Ng, "Solution of sparse positive definite matrices on a hypercube", Journal of Computational and Applied Mathematics 27 (1989), pp 183-209.
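As a rough illustration of what a fill-reducing ordering does -- this is a generic textbook heuristic, not the algorithm of the paper cited above -- here is a minimal C sketch of naive minimum-degree ordering on a small symmetric sparsity pattern. The matrix and all names in it are invented for the example; production orderings use far more elaborate data structures.

    #include <stdio.h>

    #define N 6

    int main(void)
    {
        /* adjacency pattern (off-diagonal nonzeros) of a small symmetric matrix */
        static int adj[N][N] = {
            {0,1,0,0,1,0},
            {1,0,1,0,0,0},
            {0,1,0,1,0,0},
            {0,0,1,0,1,1},
            {1,0,0,1,0,0},
            {0,0,0,1,0,0}
        };
        int eliminated[N] = {0};
        int order[N];
        int i, j, k, best, bestdeg, deg;

        for (k = 0; k < N; k++) {
            /* pick the uneliminated node of minimum current degree */
            best = -1;
            bestdeg = N + 1;
            for (i = 0; i < N; i++) {
                if (eliminated[i])
                    continue;
                deg = 0;
                for (j = 0; j < N; j++)
                    if (!eliminated[j] && adj[i][j])
                        deg++;
                if (deg < bestdeg) {
                    bestdeg = deg;
                    best = i;
                }
            }
            order[k] = best;
            eliminated[best] = 1;
            /* eliminating 'best' connects its remaining neighbours pairwise;
               these new edges are the fill-in the ordering tries to keep small */
            for (i = 0; i < N; i++)
                for (j = 0; j < N; j++)
                    if (i != j && !eliminated[i] && !eliminated[j] &&
                        adj[best][i] && adj[best][j])
                        adj[i][j] = 1;
        }

        printf("elimination order:");
        for (k = 0; k < N; k++)
            printf(" %d", order[k]);
        printf("\n");
        return 0;
    }

Choosing the node of smallest current degree keeps the cliques created by each elimination step small, and those cliques are exactly the fill-in (and, on parallel machines, much of the dependence structure) that the original question is concerned with.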
You can find most of the relevant references in the bibliography directory of Netlib, in the matrix and parallel computation bib files. Roni Levkovitz ielavkow@techunix.technion.ac.il
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ielavkow@techunix.technion.ac.il (Levkovitz Roni) Subject: Re: terminology question Message-ID: Organization: Technion, Israel Institute of Technology References: <1993Oct20.121916.28264@hubcap.clemson.edu> Greg Wilson (EXP 31 dec 93) (Greg.Wilson@cs.anu.edu.au) wrote: : I have always used the term "star" to refer to a topology in which every processor : is connected to every other; however, I am told that the term is also used for : topologies in which processors 1..N are connected to a distinguished central : processor 0. Assuming that the latter definition is more common, is there a term : for topologies of the former type? : Thanks, : Greg Wilson As far as I know, "star" refers to the latter. The former is sometimes referred to as a fully connected network, but usually we call it a clique topology. Roni Levkovitz
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Tue, 26 Oct 93 12:25:52 GMT From: Douglas Hanley Subject: Paralation This query concerns Gary Sabot's Paralation Model of Parallel Computation. I am currently attempting to implement the Match and Move operators of this programming model across a MIMD architecture in order to assess its performance characteristics (to date, I think it has only been implemented on SIMD and vector parallel platforms). I intend to use C as the base language to be extended with Match & Move. I know that a language called Paralation C was being experimented with in 1988, and I would like to hear from anybody who knows where I can get further information on its current state or who has personal experience with it. In addition, I would be grateful to hear from anybody who has any experience or information regarding attempts to implement this programming model in any base language on MIMD platforms. Thanks in advance for any help...
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: " (Guido Haechler)" Subject: DEADLINE CONTEST: HOW TO WIN A SWISS TOBLERONE CHOCOLATE! Organization: Institut fuer Informatik DEADLINE OF THE CONTEST: HOW TO WIN A SWISS TOBLERONE CHOCOLATE! Dear participants of the tripuzzle contest. The contest started very slowly (not in terms of execution time, but in terms of the number of entries). In the meantime a reasonable number of people have joined the contest. We have decided to fix the deadline of this contest at: (5. Nov. 93) 555555 N N 999 333 5 NN N 9 9 3 3 5 N N N OOO V V 9 9 3 55555 N N N O O V V 9999 33 5 N N N O O V V 9 3 5 .. N NN O O V V .. 9 3 3 55555 .. N N OOO V .. 999 333 After this date we will post a final ranking and send the promised Toblerone chocolate to the winner(s). Guido & Stephan PS: please send mail from now on to haechler@ifi.unibas.ch
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sims@ucrengr.ucr.edu (david sims) Subject: Simulating coherency protocols in a DSM system Organization: University of California, Riverside (College of Engineering/Computer Science) Hi all, I have some coherency protocols for a distributed shared memory (DSM) system that I want to simulate.
The protocols exhibit sequential consistency, weak consistency, release consistency, etc. Is there some kind of testbed or simulation package that I could use to simulate these protocols under varying conditions? thanks for any information -- David L. Sims Department of Computer Science sims@cs.ucr.edu University of California +1 (909) 787-6437 Riverside CA 92521-0304 PGP encryption key available on request. USA Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Patrick F. McGehearty Subject: Re: The Future of Parallel Computing Reply-To: patrick@convex.COM (Patrick F. McGehearty) References: <1993Oct14.131912.9126@hubcap.clemson.edu> <1993Oct19.154929.15823@hubcap.clemson.edu> In article <1993Oct19.154929.15823@hubcap.clemson.edu> dbader@eng.umd.edu (David Bader) writes: ... > We are now in an age when the high performance machines have >various data network topologies, i.e. meshes, torii, linear arrays, >vector processors, hypercubes, fat-trees, switching networks, etc.. >etc.. These parallel machines might all have sexy architectures, but >we are headed in the wrong direction if we don't take a step back and >look at the future of our work. We shouldn't have to rewrite our >algorithms from scratch each time our vendor sells us the latest >hardware with amazing benchmarks. Benchmarks should also be >attainable from STANDARD compiler options. We should NOT have to >streamline routines in assembly language, give data layout directives, >nor understand the complexities of the hardware and/or data network. > > Please let me know what you think, I both agree and disagree with your comments. To break out of a narrow niche, high performance machines must be easy to use. Otherwise, only the most dedicated will attempt to benefit from them. Portability of existing applications is critical for broad success. However, there will always be those who have a need for absolute top end performance in limited applications. For them, the limited cost of tuning one or two applications to unique architectures will be justifiable for the dramatic increases in performance. I even argue that the rest of high performance computing ultimately benefit from these specialized efforts. Without them, vector computers would never have been built in the first place, and now vector computers are almost as easy to use as conventional computers. Finally, anyone at all concerned about performance needs to have some understanding of factors affecting performance. This knowledge includes such mundane details as the value of caches and stride one memory access vs non-stride one memory access. See "High Performance Computing" by Kevin Dowd [ISBN 1-566592-032-5] for a good intro on the subject. Parallel architectures will have new issues as compared to workstations or vector machines, but as they become more widespread, we will gain a body of knowledge about what works and what to avoid. While some things will be machine specific, others will apply across a range of architectures. - Patrick McGehearty patrick@convex.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Tue, 26 Oct 93 07:29:22 PDT From: hecht@oregon.cray.com Subject: CS 6400 NEW CRAY RESEARCH SUPERSERVERS COMBINE STRONG PRICE-PERFORMANCE, RAS FEATURES, LARGE NUMBER OF APPLICATIONS FOR COMMERCIAL AND TECHNICAL DATA CENTERS With up To 64 Processors, CS6400 Products Are World's Fastest SPARC/Solaris Systems SAN FRANCISCO, Oct. 
25, 1993 -- Cray Research Superservers (CRS), the Ore.-based subsidiary of Cray Research, Inc. (NYSE:CYR), today announced a new Superserver product line. Martin Buchanan, general manager of CRS, said the new CRAY SUPERSERVER 6400 (CS6400) systems, with up to 64 processors, are the world's fastest and most expandable SPARC/Solaris-compliant systems. "Our's are the first high-end servers to combine the strong price-performance of Sun products with the reliability, availability and serviceability (RAS) features data center managers expect," Buchanan said. "The CS6400 products are enterprise servers aimed at the rightsizing' market, especially commercial and technical data centers concerned about the high cost of upgrading and running mainframe systems," he said. The products were developed by CRS under a Jan. 1992 technology agreement between Cray Research and Sun Microsystems, Inc., Mountain View, Calif., and are a binary-compatible upward extension of Sun's product line. "The SPARC architecture is a powerful foundation for cost- effective scalability," said Scott McNealy, Sun chairman and chief executive officer. "The Sun and CRS product lines will form a price-performance continuum from SPARCclassics to SPARCcenter servers to the new CS6400 products. Sun users who require more power on the network can move to the binary- compatible CS6400 systems with no migration problems." The CS6400 systems run the current version of Sun's Solaris operating environment, which is an implementation of UNIX System V Release 4, he said. "Any program that runs on a Sun system will run on the new CRS systems without modification, and vice versa." "With the CS6400, CRS will address commercial markets with non-traditional Cray applications," said Lester Davis, Cray Research chief operating officer. "That's why CRS is organized as a separate business unit with its own hardware and software development capabilities. Martin Buchanan has assembled a strong marketing and sales team with many years of experience selling into commercial markets, as well as into the technical computing arena. We're confident that the enterprise server market is ready to move to Cray Research added value, especially when this is available on the same price-performance curve as Sun products." Buchanan said CS6400 systems are expandable and can scale with customers' data processing needs. The systems are offered with four to 64 SuperSPARC RISC microprocessors (initially at 60 MHz), 256 megabytes (million bytes) to 16 gigabytes (billion bytes) of central memory, 1.3 gigabytes per second peak memory bandwidth, and more than two terabytes (trillion bytes) of online disk storage. U.S. pricing begins at under $400,000 for the four-processor version, and at $2.5 million for the top-of-the-line 64-processor system. He said CRS expects to sell hundreds of the new systems. Initial shipments of the new systems will begin in late 1993, Buchanan said. Volume shipments are scheduled to begin in first-quarter 1994. CRS is working with customers and prospects in the traditional Sun and Cray Research markets, as well as new markets. CRS is in negotiations with several organizations in the electronic computer-aided design (ECAD), transportation/distribution, manufacturing, university and electric utility markets and will announce these customers when order agreements are signed, Buchanan said. Separately today, CRS announced that SICAN, a leading German microelectronics firm, has ordered a CS6400 system. 
SICAN is scheduled to receive a 48-processor system by mid-1994. According to Buchanan, the large number of applications available on SPARC/Solaris systems was an important attraction for CRS. "We are leveraging SPARC's leadership in the RISC market through binary compatibility with Sun's product line. As we work with independent software vendors to have their products supported, technical ports are not an issue." Buchanan cited a 1992 International Data Corporation study showing SPARC with a 57 percent marketshare for the UNIX RISC market. Buchanan said CRS is in discussions with several major developers of key software packages and connectivity products, including relational database management systems (RDBMS), transaction processing monitors, fourth generation languages, computer-aided software engineering (CASE) tools, report generation packages, hierarchical storage management (HSM) solutions and mainframe connectivity tools. He said CRS recently benchmarked Oracle, the most popular database server, on an early version CS6400 system. "On the benchmark, which simulated 500 users accessing, updating and querying data, the CS6400 performed comparable to a mid-range mainframe system, which would cost more than $5 million, five times that of the 16-processor CS6400 system benchmarked," said Darshan Karki, president of SuperSolutions Corp., Minneapolis, a firm that re-engineers enterprise-wide applications and system software and assisted CRS with the recent Oracle benchmark. Initial target markets for the new systems are: ECAD, financial service and investment banking, general engineering, government, petroleum, and telecommunications. According to A.J. Berkeley, CRS senior director of sales and marketing, CRS will market and sell the new products through its own dedicated international sales force, the Cray Research sales force, a global network of systems integrators and value- added resellers (VARs), and joint initiatives with Sun. "Sun has been extremely helpful in pointing us toward some of the right parties and we've supplemented this with our own contacts. We expect to announce key agreements later this year," he said According to Buchanan, "Our relationship with Sun goes beyond merely licensing their technology. We are working closely with several Sun business units on interoperability and general hardware and software engineering for future systems." He said the CS6400 systems bring together Sun hardware and software technology with Cray Research value-added features for high performance and system reliability and serviceability. "The CS6400 delivers features to the open systems environment that the data center users have enjoyed for years," Buchanan said. "The new system has built-in reliability, availability and serviceability features -- a first in the high-end, open systems server arena." For example, should a component fail, the system automatically reboots, isolates the fault, and reconfigures itself. Uptime is further maximized with "hot swap" capabilities, which allow a failed module to be removed and replaced in the system while it's still running, Buchanan said. Upgrades can also be done while the system is online. An independent service processor performs online and remote diagnostics, logging, and monitoring functions and data is protected through features such as disk mirroring, page isolation, and memory scrubbing, he said. 
"There are many exciting aspects about CRS and its Superserver," said Jim Johnson, chairman of The Standish Group, a market research firm based in South Yarmouth, Mass. "Most important is the availability of Cray's high-end technology while following a pricing strategy similar to Sun's. The key data center management products...will give the kind of performance and quality that mission-critical applications require. The CRS Superserver will have some of the features that data center managers take for granted that are not in today's UNIX servers. These are essential features and people will look very favorably on them at the kind of price and performance CRS offers." Buchanan said Cray Research was the first high-performance computing company to embrace the UNIX standard. "Over the past decade, Cray Research has substantially enhanced UNICOS, the company's 64-bit symmetric multiprocessing implementation of the UNIX operating system. Many key features of Cray's supercomputing environment -- such as sophisticated tape management, networked batch processing, systems management software, program debugging tools and high-performance compilers -- will also be important for commercial and technical users of the CS6400 systems and will be available in 1994." CRS also announced today that: o CRS has signed a memorandum of understanding with Sun Microsystems Computer Corporation for SunIntegration Services to become a reseller of the CS6400 system; o Electricite' de France, the world's largest electrical utility, Clamart, France, will be an early customer for CRS' new CS6400 system; o CRS has signed a memorandum of understanding with Oracle to make Oracle7 available on the new CS6400 systems; o CRS has signed a memorandum of understanding with the ASK Group, developers of the ASK INGRES Intelligent Database system, to make the INGRES database product available on the CS6400 system; o INFORMIX-OnLine will be available on the CS6400 systems; o Sybase, Inc.'s support for Cray Research's high-end SPARC/Solaris-compatible strategy; o CRS has signed an agreement with Brixton Systems, Inc., Cambridge, Mass., to make available on the CS6400 system Brixton's suite of connectivity software, which links IBM mainframes with open systems computers, enabling data to be shared between these systems; o CRS and Information Management Company (IMC) have signed an agreement for IMC to make Open TransPort for MVS and TUXEDO transaction processing system available on the CS6400 system; o CRS and T-Mass GmbH have signed an agreement for T- Mass to support and distributed UniTree on the CS6400 systems; and o CRS has signed an agreement with JYACC, Inc. to provide its JAM Version 6 Application Development Toolset on the CS6400. CRS is dedicated to creating the world's leading SPARC/Solaris-compliant computer systems. Cray Research creates the most powerful, highest-quality computational tools for solving the world's most challenging scientific and industrial problems. ### -- -- Conrad Anderson Employee Communications (612) 683-7338 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sevans@news.weeg.uiowa.edu (Scott A Evans) Message-ID: <1993Oct26.145208.25359@news.weeg.uiowa.edu> Organization: University of Iowa, Iowa City, IA, USA I'm looking for some benchmark information on parallel routing algorithms. Does anyone have some pointers that they could give me to lead me in the right direction? Thanks! 
Scott Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: icw@ecs.soton.ac.uk (I C Wolton) Subject: RAPS Workshop Date: 27 Oct 93 11:04:12 GMT Organization: Electronics and Computer Science, University of Southampton I posted this before but it doesn't seem to have got through so here goes again .... RAPS Open Workshop on Parallel Benchmarks and Programming Models Chilworth Manor Conference Centre, Southampton, UK 7-8 Dec 1993 Workshop Overview ----------------- This workshop will review recent developments in programing models for parallel applications, outline the features of some of the RAPS parallel benchmarks and present some complementary international initiatives. The final session will give the vendors an opportunity to present the results of the RAPS benchmarks on their latest machines. The RAPS Consortium ------------------- The RAPS consortium was put together to promote the creation of benchmarks for important production applications on massively-parallel computers (RAPS stands for Real Applications on Parallel Systems). As part of this activity it has a strong interest in adopting a programming model that can provide portability without excessive sacrifice of performance. The consortium consists of a number of users and developers of significant large production codes running on supercomputers. It is supported by a Consultative Forum of computer manufacturers which currently includes Convex, Cray, Fujitsu, IBM, Intel and Meiko. Codes being worked on for the RAPS benchmark suite include: PAM-CRASH - a finite element code mainly used for car crash simulations IFS/ARPEGE - a global atmospheric simulation code used for meteorology and climatology FIRE - a fluid flow code used for automotive flow simulations GEANT - used by CERN to simulate the interaction of high-energy particle showers with detectors Provisional Programme --------------------- The workshop will be held over two days, starting after lunch on Tuesday 7th December and finishing at lunchtime on Wednesday 8th December. Lunch will be available on both days. Tuesday 7 Dec, Afternoon Current status of RAPS Karl Solchenbach, PALLAS ESPRIT Application porting activities Adrian Colebrook, Smith, Guildford The Proposed Message Passing Interface Standard (MPI) Ian Glendinning, University of Southampton Distributed Fortran Compiler Techniques Thomas Brandes, GMD Impact of Cache on Data Distribution Richard Reuter, IBM Heidelberg Workshop Dinner Wed 8 Dec, Morning The PEPS Benchmarking Methodolgy Ed Brocklehurst, National Physical Laboratory The PARKBENCH Initiative Tony Hey, University of Southampton The IFS spectral model: the 3D version with some preliminary results David Dent, ECMWF Vendor's Presentation of Results for the RAPS Benchmarks Registration Details -------------------- The registration fee is 120 pounds sterling, including lunch and refreshments. An optional workshop dinner is being arranged at 25 pounds per head. Accomodation is available at Chilworth Manor for 54.50 pounds per night. Cheques should be made payable to "University of Southampton" Bookings and enquiries to: Chris Collier Electronics & Computer Science Highfield University of Southampton Southampton S09 5NH Tel: 0703 592069 Fax: 0703 593045 Email: cdc@ecs.soton.ac.uk This form should be returned to the conference organiser, Chris Collier. Name ....................................................... Organisation ............................................... 
Address .................................................... .................................................... Telephone .................................................. Email ...................................................... Special Dietary Requirements ................................ ............................................................. Registration (Price in Pounds Sterling) : 50.00 I would like accomodation at 52.50 pounds per night for the nights of ............................................................. I would like to attend the workshop dinner at 25 pounds ...... Yes/No TOTAL FEE ENCLOSED ............................................... Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ali@pyra.co.uk (Ali Shirnia) Subject: Q: IBM MPP system Reply-To: ali@pyra.co.uk (Ali Shirnia) Organization: Pyramid Technology Ltd. I need to understand the architecture of the IBM MPP system. Can any one help with references? papers? etc? Thanks -m-------- Ali Shirnia Phone : +44 252 373035 ---mmm------ Pyramid Technology Fax : +44 252 373135 -------mmmmmmm-- pyramid!pyrltd!ali !ukc!pyrltd!ali -------mmmmmmmm- ali@pyra.co.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: sitarama@cps.msu.edu (Kocherlakota Sitarama) Newsgroups: sci.math,comp.theory,comp.parallel Subject: Journal Article Acceptance timings Organization: Department of Computer Science, Michigan State University hello -- Do you know of any statistics on the average delay between journal article submission dates -to- journal article acceptance notification dates (not published date) ? I will be interested in Journals which relate to all areas of computer science and math journals which cover a significant computer science articles (ex. Descrete Math. J. Graph Theory etc.,). Thanks in advance, Swamy Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: steved@liv.ac.uk (Mr. S.T. Downing) Subject: SYMPOSIUM: Developments in Parallel Computing Organization: The University of Liverpool Please see attached programme. If you are interested in attending, please return the attached booking form by e-mail, or post as soon as possible. If you have any queries please do not hesitate to contact either Sheila Bilton on Jacqui Cowan on 051 794 4710. International Developments in Parallel Computing November 9th and 10th 1993 A Two-day Symposium organised by ACT-UETP and the Institute of Advanced Scientific Computation (IASC University of Liverpool PROGRAMME Tuesday November 9th 1993 9.30am Parallel Supercomputing - A Review of Activities at Daresbury Laboratory Dr. Rob Allan, Daresbury Laboratory, Warrington 10.30am COFFEE 10.45am New Generations of MPP Platforms Mr. Steve Kitt, Parsys Ltd., London 11.45pm The Centre for Novel Computing: Computing the Future. Mr. Tom Franklin 12.45pm LUNCH 1.45pm A Supercomputer MPP - the Cray T3D Mr. John Fleming, Cray Research, Bracknell 2.45pm The Meiko CS-2MPP Architecture Dr. John Taylor, Meiko Ltd., Bristol 3.45pm TEA 4.00pm Opportunity for discussion and demonstrations 5.00pm CLOSE PROGRAMME Wednesday, November 10th, 1993. 9.30am Evolution of Supercomputer Architectures Professor Mateo Valero, Technical University of Catalunya, Barcelona, Spain 10.30 COFFEE 10.45am Benchmarking for Distributed Memory Parallel Systems: Gaining Insight from Numbers Dr. 
Cliff Addison, IASC, University of Liverpool 11.45am RUBIS: "Runtime Basic Interface System" for Transputer based machines M. R. Pathenay, Telmat Informatique, France 12.45 LUNCH 1.45pm Parallel Matrix Kernels Dr. Ken Thomas, University of Southampton 2.45pm Problem Solving Environments Professor Theodorou Papatheudorou University of Patras, Greece 3.45pm TEA 4.00pm Performance Evaluation and Monitoring tools Kostas Pantazopoulous, First Informatics, Patras, Greece 5.00pm CLOSE BOOKING INFORMATION Venue: The University of Liverpool. Full details to be sent to all delegates. Cost: ACT-UETP members and academics:- 1 day: 65.00 2 days 100.00 Others 1 day: 95.00 2 days: 150.00 The cost includes course materials, coffee, tea and lunch. Reasonably priced overnight accommodation can be booked on request. The University of Liverpool reserves the right to change programme details. Booking form (Please highlight as appropriate) Please reserve a place on the Symposium "International Developments in Parallel Computing", on: Tuesday November 9th Wednesday November 10th Name Position Organisation Address Phone No: Fax No: My Organisation is a member of ACT-UETP Please let me have details of overnight accommodation I enclose a cheque for .................. to cover the symposium fee Please invoice me Signature Please return form to: e-mail: iasc@liv.ac.uk Sheila Bilton, Institute of Advanced Scientific Computation, University of Liverpool, Victoria Building, Brownlow Hill, Liverpool, L69 3BX. Telephone +44 51 794 4552 Fax +44 51 794 4754 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: csmort@unix1.sncc.lsu.edu (Donald Morton) Subject: Request for T3D info Organization: Louisiana State University InterNetNews Site(test mode) I recently saw a subject heading which I believe was in this newsgroup concerning some information on the T3D architecture. Since I'm new to these newsgroups, I'm still stumbling along, losing interesting postings, etc.! If anyone knows about available documentation on the T3D, I'd be very appreciative if they shared it with me. Thanks in advance, Don Morton Dept. Computer Science Louisiana State University Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 27 Oct 93 15:30:40 +0100 From: lmr@pimac2.iet.unipi.it (Leonardo Reyneri) Subject: Microneuro 94 **************************************************************************** MICRONEURO 94 The Fourth International Conference on Microelectronics for Neural Networks and Fuzzy Systems Torino (I), September 26-28, 1994 FIRST CALL FOR PAPERS This conference is the fourth in a series of international conferences dedicated to all aspects of hardware implementations of Neural Networks and Fuzzy Systems. MICRONEURO has emerged as the only international forum devoted specifically to all hardware implementation aspects, giving particular weight to those interdisciplinary issues which affect the design of Neural and Fuzzy hardware directly. TOPICS The conference program will focus upon all aspects of hardware implementations of Neural Networks and Fuzzy Systems and their applications in the real world. 
Topics will concentrate upon the following fields: - Analog and mixed-mode implementations - Digital implementations - Optical systems - Pulse-Stream computation - Weightless Neural systems - Neural and Fuzzy hardware systems - Interfaces with external world - Applications of dedicated hardware - VLSI-friendly Neural algorithms - New technologies for Neural and Fuzzy Systems Selection criteria will be based also on technical relevance, novelty of the approach and on availability of performance measurements for the system/device. INFORMATION FOR AUTHORS All submitted material (written in English) will be refereed and should be typed on A4 paper, 1-1/2 spaced, 12 point font, 160x220 mm text size. All accepted material will appear in the proceedings. PAPERS should not exceed 10 pages including figures and text. Also reports on EARLY INNOVATIVE IDEAS will be considered for presentation. In this case the submission should be a short description of the novel idea, not exceeding 6 pages in length, and it must be clearly marked ``Innovative Idea''. The most interesting papers and ideas will be published in a special issue of IEEE MICRO. SUBMISSIONS Six copies of final manuscripts, written according to the above requirements, shall be submitted to the Program Chairman. Submissions arriving late or significantly departing from length guidelines, or papers published elsewhere will be returned without review. Electronic versions of the submission (possibly in LATEX format) are kindly welcome. DEADLINES Submission of paper and/or ideas May 30, 1994 Notification of acceptance July 15, 1994 THE WORKSHOP VENUE The venue of MICRONEURO '94 is Torino, the historic and beautiful center of Piemonte. The town is surrounded by the highest mountains in Europe and by beautiful hills and landscapes. The region is also famous for its excellent wines. MICRONEURO '94 will be held at the Politecnico di Torino. The venue is conveniently located close to the town centre, with many restaurants and cafes close by. General Chair: H.P. Graf AT T Bell Laboratories Room 4 G 320 HOLMDEL, NJ 07733 - USA Tel. +1 908 949 0183 Fax. +1 908 949 7722 Program Chair: L.M. Reyneri Dip. Ingegneria Informazione Universita' di Pisa Via Diotisalvi, 2 56126 PISA - ITALY Tel. +39 50 568 511 Fax. +39 50 568 522 E.mail lmr@pimac2.iet.unipi.it Organisation: COREP Segr. MICRONEURO '94 C.so Duca d. Abruzzi, 24 10129 TORINO - ITALY Tel. +39 11 564 5108 Fax. +39 11 564 5199 Steering Committee: K. Goser (D) J. Herault (F) W. Moore (UK) A.F. Murray (UK) U. Ramacher (D) M. Sami (I) Program Committee: E. Bruun (DK) H.C. Card (CA) D. Del Corso (I) P. Garda (F) M. Jabri (AU) S.R. Jones (UK) C. Jutten (F) H. Klar (D) J.A. Nossek (D) A. Prieto (E) U. Rueckert (D) L. Spaanenburg (NL) L. Tarassenko (UK) M. Verleysen (B) E. Vittoz (CH) J. Wawrzynek (USA) W. 
Yang (USA) **************************************************************************** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 27 Oct 93 12:15:29 EDT From: rjain@faline.bellcore.com (Ravi Jain) Subject: CfP: Parallel I/O workshop CALL FOR PAPERS Second Annual WORKSHOP ON I/O IN PARALLEL COMPUTER SYSTEMS April 26, 1994, Hotel Regina, Cancun, Mexico sponsored by: to be held in conjunction with: IEEE Technical Committee on International Parallel Processing Parallel Processing Symposium (IPPS '94) SYMPOSIUM: The eighth annual International Parallel Processing Symposium (IPPS '94) is sponsored by IEEE Computer Society and will be held in cooperation with ACM SIGARCH. At IPPS '94 engineers and scientists from around the world will present the latest research findings in all aspects of parallel and distributed processing. WORKSHOP: The workshop will be held on the first day of the symposium (April 26, 1994). The workshop will focus on the increasingly important I/O bottleneck facing parallel and distributed computer systems. Applications affected by this bottleneck include Grand Challenge problems as well as newer application areas such as multimedia information systems and visualization. Comprehensive solutions to the parallel I/O bottleneck will include not only innovative hardware and architectural designs, but also new theoretical, operating systems, compilers and applications approaches. As for the first workshop held in 1993, this workshop will aim to bring together researchers in order to compare and integrate theoretical and experimental approaches and solutions. Papers are invited demonstrating original unpublished research. Topics of interest include: I/O-intensive applications Theory of I/O complexity I/O subsystem architecture Compiler support for parallel I/O Operating system support for parallel I/O Scheduling and resource allocation Concurrent and parallel file systems Performance modeling and evaluation REGISTRATION: The workshop is free with registration for IPPS '94. Contact: ipps94@halcyon.usc.edu SUBMITTING PAPERS: All papers will be reviewed. The first page must include a 100-word abstract and the name, address, telephone number and electronic mail address of the author to whom correspondence is to be sent. The manuscript should be at most 20 pages long (including figures and references). Papers will be collected into a proceedings and distributed at the workshop. Send six copies of the paper to: John Werth Dept of Computer Sciences Univ of Texas at Austin Taylor Hall Rm. 2.122 Austin, TX 78712 Phone: (512)-471-9583 Fax: (512)-471-5888 For enquiries by e-mail: ippsio@thumper.bellcore.com SCHEDULE: COMPLETE PAPER DUE: Jan 31, 1994 Authors notified: Mar 15, 1994 Camera-ready copy due: Apr 7, 1994 PROGRAM CO-CHAIRS: Ravi Jain, Bellcore John Werth, U. Texas, Austin J. C. Browne, U. of Texas, Austin PROGRAM COMMITTEE: Peter Chen, Univ. of Michigan Peter Corbett, IBM Watson Tom Cormen, Dartmouth David DeWitt, Univ. of Wisconsin Sam Fineberg, NASA Ames S. Ghandeharizadeh, USC Paul Messina, Caltech Wayne Roiger, Cray Research Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: ehling@rice-chex.ai.mit.edu (Teresa A. 
Ehling) Newsgroups: comp.parallel,comp.objects,comp.programming,comp.theory Subject: "Research Directions in Concurrent OOP" Organization: MIT Artificial Intelligence Lab Just Released from The MIT Press -- RESEARCH DIRECTIONS IN CONCURRENT OBJECT-ORIENTED PROGRAMMING edited by Gul Agha, Peter Wegner, and Aki Yonezawa This collection of original research contributions provides a comprehensive survey of developments at the leading edge of concurrent object-oriented programming. It documents progress -- from general concepts to specific descriptions -- in programming language design, semantic tools, systems, architectures, and applications. Chapters are written at a tutorial level and are accessible to a wide audience, including researchers, programmers, and technical managers. CONTENTS I Language Issues 1 "Abstracting and Modularity Mechanisms for Concurrent Computing" Gul Agha, Svend Frolund, Woo Young Kim, Rajendra Panwar, Anna Patterson, and Daniel Sturman 2 "Tradeoffs between Reasoning and Modeling" Peter Wegner 3 "A Survey of Logic Programming-Based Object-Oriented Languages" Andrew Davison 4 "Analysis of Inheritance Anomaly in Object-Oriented Concurrent Programming Languages" Satoshi Matsuoka and Akinori Yonezawa 5 "Composing Active Objects" Oscar Nierstrasz II Programming Constructs 6 "Supporting Modularity in Highly-Parallel Programs" Andrew Chien 7 "Multiple Concurrency Control Policies in an Object-Oriented Programming System" Gail Kaiser, Wenwey Hseush, Steven Popovich, and Shyhtsun Wu 8 "Posts for Objects in Concurrent Logic Programs" Sverker Janson, Johan Montelius, and Seif Haridi III Language Design 9 "Specifying Concurrent Languages and Systems w/delta-Grammars" Simon Kaplan, Joseph Loyall, and Steven Goering 10 "Interaction Abstract Machines" Jean-Marc Andreoli, Paolo Ciancarini, and Remo Pareschi 11 "CC++: A Declarative Concurrent Object-Oriented Programming Notation" K. Mani Chandy and Carl Kesselman 12 "A Logical Theory of Concurrent Objects and Its Realization in the Maude Language" Jose Meseguer IV Operating Systems 13 "CHOICES: A Parallel Object-Oriented Operating System" Roy Campbell and Nayeem Islam 14 "COSMOS: An Operating System for a Fine-Grain Concurrent Computer" Waldemar Horwat, Brian Totty, and William Dally V Performance Monitoring 15 "Monitoring Concurrent Object-Based Programs" Bruce Delagi, Nakul Saraiya, and Sayuri Nishimura 532 pages; hardcover $49.95 U.S. ISBN 0-262-01139-5 AGHRH The MIT Press 55 Hayward Street Cambridge, MA 02142 Vox: (800) 356-0343 -or- (617) 625-8569 E-mail: mitpress-orders@mit.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: anderson@cs.unc.edu (James H. Anderson) Subject: PODC94 Call for Papers Date: 27 Oct 1993 13:29:57 -0400 Organization: The University of North Carolina at Chapel Hill IMPORTANT: To reduce expenses, we have decided to not distribute the PODC Call for Papers and Conference Announcement via surface mail this year. So, please think twice before discarding this announcement. ----------**********----------**********----------**********---------- CALL FOR PAPERS 1994 ACM Symposium on Principles of Distributed Computing (PODC) The Thirteenth ACM Symposium on Principles of Distributed Computing (PODC), sponsored by ACM SIGACT and SIGOPS, will be held in Los Angeles, California, USA, August 14-17, 1994. Original research contributions are sought that address fundamental issues in the theory and practice of distributed and concurrent systems. 
Specially sought are papers that illuminate connections between practice and theory. Topics of interest include, but are not limited to: Distributed algorithms and complexity, Network protocols and architectures, Multiprocessor algorithms and architectures, Distributed operating systems -- principles and practice, Concurrency control and synchronization, Issues of asynchrony, synchrony, and real time, Fault tolerance, Cryptography and security, Specification, semantics, and verification. NEW CONFERENCE FORMAT: This year's conference will have two tracks of presentations. Long presentations will follow the standard format of recent years (25 minute talks), and will be accompanied by 10 page extended abstracts in the proceedings. It is understood that the research reported in these abstracts is original, and is submitted exclusively to this conference. In addition, brief presentations (10 minute talks) are invited as well. These presentations will be accompanied by a short (up to 1 page) abstract in the proceedings. Presentations in this track are understood to reflect early research stages, unpolished recent results, or informal expositions, and are not expected to preclude future publication of an expanded or more polished version elsewhere. (The popular ``rump'' session will still take place this year as well, although it is expected to be shorter given the new track.) SUBMISSIONS: Please send 12 copies of a detailed abstract (printed double-sided if possible) or a short abstract (1 page) with the postal address, e-mail address, and telephone number of the contact author, to the program chair: David Peleg IBM T.J. Watson Research Center P.O. Box 704 Yorktown Heights, New York 10598 E-mail: peleg@watson.ibm.com To be considered by the committee, abstracts must be received by February 4, 1994 (or postmarked January 28 and sent via airmail). This is a firm deadline. Acceptance notification will be sent by April 15, 1994. Camera-ready versions of accepted papers and short abstracts will be due May 10, 1994. ABSTRACT FORMAT: An extended abstract (for long presentation) must provide sufficient detail to allow the program committee to assess the merits of the paper. It should include appropriate references and comparisons to related work. It is recommended that each submission begin with a succinct statement of the problem, a summary of the main results, and a brief explanation of their significance and relevance to the conference, all suitable for a non-specialist. Technical development of the work, directed to the specialist, should follow. Submitted abstracts should be no longer than 4,500 words (roughly 10 pages). If the authors believe that more details are essential to substantiate the main claims of the paper, they may include a clearly marked appendix that will be read at the discretion of the program committee. A short abstract (for brief presentation) should provide a much more concise description (up to 1 page) of the results and their implications. Authors should indicate in the cover letter for which track they wish to have their submission considered. In general, the selection criteria for long presentations are expected to be much more stringent than those for short ones. At the authors' request, a (10-page) submission may be considered for both tracks, with the understanding that it will be selected for at most one. (Such a request will in no way affect the chances of acceptance.) 
PROGRAM COMMITTEE: James Anderson (University of North Carolina), Brian Bershad (University of Washington), Israel Cidon (Technion and IBM T.J. Watson), Michael J. Fischer (Yale University) Shay Kutten (IBM T.J. Watson), Yishai Mansour (Tel-Aviv University), Keith Marzullo (University of California at San Diego), David Peleg (Weizmann Institute, IBM T.J. Watson and Columbia University), Mark Tuttle (DEC CRL), Orli Waarts (IBM Almaden), Jennifer Welch (Texas A&M University) CONFERENCE CHAIR: James Anderson, University of North Carolina. LOCAL ARRANGEMENTS CHAIR: Elizabeth Borowsky, UCLA. ----------**********----------**********----------**********---------- Jim Anderson anderson@cs.unc.edu PODC94 General Chair Computer Science Dept 919 962-1757 (voice) University of North Carolina 919 962-1799 (fax) Chapel Hill, NC 27599-3175 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: drk@melodian.cs.uiuc.edu (Dave Kohr) Subject: Re: Info on PABLO wanted Sender: news@cs.uiuc.edu Organization: CS Dept., Univ. of Illinois at Urbana-Champaign References: <1993Oct20.121909.28171@hubcap.clemson.edu> In article <1993Oct20.121909.28171@hubcap.clemson.edu> jgarcia@cse.ucsc.edu (Jorge Garcia) writes: >I'm looking for any information available on the PABLO system, from >the University of Illinois (I think). Does anyone know where I can >find technical articles about it, documentation, or any other source >of information? Pablo is a system for the collection, display, and analysis of parallel program performance data, developed by Prof. Daniel A. Reed's research group here at the Univ. of Illinois at Urbana-Champaign. It is available via anonymous FTP from bugle.cs.uiuc.edu, in the directory pub/Release-2.0. Extensive documentation is available in the directory pub/Release-2.0/Documentation. See the README file in that directory for an overview of the available documents. -- Dave Kohr CS Graduate Student Univ. of Illinois at Urbana-Champaign Work: 3244 DCL, (217)333-6561 Home: (217)359-9350 E-mail: drk@cs.uiuc.edu "One either has none or not enough." Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mcc@elan.princeton.edu (Martin C. Carlisle) Subject: Does anyone have parallel C code on pointer data structures? Organization: Princeton University I am currently doing compiler research at Princeton, and am looking for small codes that are parallelized and use pointer data structures (trees, lists, graphs, DAGs, etc.) Examples might be a parallel implementation of shortest path, minimum spanning tree, etc. Thanks for your help. -- Martin C. Carlisle (mcc@cs.Princeton.edu, martinc@pucc.BITNET) OFFICE: Dept. of Comp. Sci. - 35 Olden St.; Princeton, NJ 08544-2087 HOME: 101 Linden Ln.; Princeton, NJ 08540 Phone: (609) 924-8753 (home), 258-1797 (office), 258-1771 (fax) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 27 Oct 1993 16:00:58 -0700 From: peter@euclid.math.usfca.edu (Peter Pacheco) Subject: preferred language? Organization: University of San Francisco Hi, I'd like to know which languages are preferred by people who program distributed memory machines. If you could program in the language of your choice, would you prefer to program in C with message-passing subroutines, C-Linda, Fortran with message-passing subroutines, . . . ? 
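For readers wondering what the first option looks like in practice, here is a minimal sketch of "C with message-passing subroutines", written against the MPI interface mentioned elsewhere in this issue purely as one concrete example; any send/receive library would look much the same. It assumes the program is started on at least two processes, and the value exchanged is arbitrary.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* process 0 sends one integer to process 1 */
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* process 1 receives it and reports */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

A C-Linda version of the same exchange would instead deposit and withdraw tuples from a shared tuple space (out/in), with no explicit destination ranks; the survey above is essentially asking which of these styles people prefer.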
So that I can get a better picture of what people really want, please also let me know what kinds of applications you're writing. Please email responses. I'll summarize to the net. Thanks very much, Peter Pacheco Department of Mathematics University of San Francisco San Francisco, CA 94117
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: news.groups,comp.parallel From: thibaud@ncar.ucar.edu (Francois P. Thibaud) Subject: Re: RFD: comp.parallel.connectmachine Keywords: TMC, CM-5 Organization: NCAR References: Hello Everybody ! I approve the creation of "comp.parallel.connectmachine". The charter is OK as is. I am in charge of NCAR's CM-5 User's Group and I am planning to create (or participate in the creation of) an international TMC CM-2/200/5 Users Group "a la" the Cray User's Group. Kind Regards ! Francois P. Thibaud Organization: University of Maryland at College Park (UMCP) and The National Center for Atmospheric Research (NCAR) Address: 1850, Table Mesa Drive; PO Box 3000; Boulder CO 80307-3000 USA Phone: (+1)303-497-1707; Fax: (+1)303-497-1137; Room 505, North tower Internet: thibaud@ncar.ucar.edu (thibaud@ra.cgd.ucar.edu)
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sek@txfs1.hfb.se (Sven Eklund - HFB T/CPT) Subject: >SuperSPARC v.8 cache organization Organization: Falun/Borlange University College, Borlange, Sweden Date: Thu, 28 Oct 1993 09:52:21 GMT Apparently-To: X-Charset: ASCII X-Char-Esc: 29 Hi! Could anyone out there give me some more information on the cache organization of the SuperSPARC? I have read the "Technical White Paper" on the microprocessor, but I'd like to know: o Does the cache use write back or write through? o How many cycles is the miss penalty? o What block size is used? o What does "pseudo" mean, more exactly, in "Pseudo LRU Replacement"? o How much do collision conflicts account for in the 2% miss rate for the instruction cache (8% miss rate for the data cache)? If you have any info, or know where I could get some answers, please e-mail me. Thanks! /Sven Eklund, sek@hfb.se
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: tum.soft,tum.info.soft,comp.parallel From: pleierc@Informatik.TU-Muenchen.DE (Christoph Pleier) Subject: Distributed programming with C on heterogeneous UNIX-networks! Originator: pleierc@hpeick9.informatik.tu-muenchen.de Sender: news@Informatik.TU-Muenchen.DE (USENET Newssystem) Organization: Technische Universitaet Muenchen, Germany Distributed programming with C on heterogeneous UNIX-networks! The Distributed C Development Environment is now available by anonymous ftp from ftp.informatik.tu-muenchen.de in the directory /local/lehrstuhl/eickel/Distributed_C. The Distributed C Development Environment was developed at Technische Universitaet Muenchen, Germany, at the chair of Prof. Dr. J. Eickel, and is a collection of tools for parallel and distributed programming on single-processor, multiprocessor and distributed UNIX systems, especially on heterogeneous networks of UNIX computers. The environment's main purpose is to support and simplify the development of distributed applications on UNIX networks. It consists of a compiler for a distributed programming language, called Distributed C, a runtime library and several useful tools. The programming model is based on explicit concurrency specification in the programming language DISTRIBUTED C, which is an extension of standard C.
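The announcement's most distinctive feature -- passing a dynamic structure between processes by handing over only its anchor, with the runtime copying the rest -- is easier to picture with a sketch. The following is plain C, not Distributed C syntax, and every name in it is invented for the illustration; it merely shows the sort of flattening a runtime has to perform before a linked structure can be shipped to another address space.

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int          value;
        struct node *next;
    };

    /* Follow the anchor, count the nodes, and copy their payloads into one
       malloc'd buffer that a message layer could ship as a block of bytes. */
    int *flatten_list(const struct node *anchor, unsigned long *count)
    {
        const struct node *p;
        int *buf;
        unsigned long n, i;

        n = 0;
        for (p = anchor; p != NULL; p = p->next)
            n++;
        buf = (int *) malloc(n * sizeof(int));
        if (buf == NULL)
            return NULL;
        i = 0;
        for (p = anchor; p != NULL; p = p->next)
            buf[i++] = p->value;
        *count = n;
        return buf;
    }

    int main(void)
    {
        struct node a, b, c;
        unsigned long n, i;
        int *flat;

        c.value = 3; c.next = NULL;
        b.value = 2; b.next = &c;
        a.value = 1; a.next = &b;

        flat = flatten_list(&a, &n);   /* only the anchor is handed over */
        if (flat != NULL) {
            for (i = 0; i < n; i++)
                printf("%d ", flat[i]);
            printf("\n");
            free(flat);
        }
        return 0;
    }

The point of Distributed C, as described in the remainder of the announcement, is that the programmer never writes this marshalling code: the compiler and runtime system copy the complete structure automatically when an anchor is passed to another process.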
The language constructs were mainly taken from the language CONCURRENT C developed by N. Gehani and W. D. Roome and are based on the concepts for parallel programming implemented in the language ADA. Distributed C makes possible the common programming in C together with the user-friendly programming of process management, i. e. the specification, creation, synchronization, communication and termination of concurrently executed processes. The Distributed C Development Environment supports and simplifies the dis- tributed programming in several ways: o Development time is reduced by checking Distributed C programs for errors during compilation. Because of that, errors within communication or synchronization actions can be easier detected and avoided. o Programming is simplified by allowing the use of simple pointer types even on loosely-coupled systems. This is perhaps the most powerful feature of Distributed C. In this way, dynamic structures like chained lists or trees can be passed between processes elegantly and easily - even in heterogeneous networks. Only the anchor of a dynamic structure must be passed to another process. The runtime system automatically allocates heap space and copies the complete structure. o Developement is user-friendly by supporting the generation and installation of the executable files. A special concept was developed for performing the generation and storage of binaries by local and remote compilation in heterogeneous UNIX-networks. o Programming difficulty is reduced by software-aided allocating processes at runtime. Only the system administrator needs to have special knowledge about the target system's hardware. The user can apply tools to map the processes of a Distributed C program to the hosts of a concrete target system. o Execution time is reduced by allocating processes to nodes of a network with a static load balancing strategy. o Programming is simplified because singleprocessor-, multiprocessor- and distributed-UNIX-systems, especially homogeneous and heterogeneous UNIX- networks can be programmed fully transparently in Distributed C. The Distributed C Development Environment consists mainly of the tools: o Distributed C compiler (dcc): compiles Distributed C to standard C. o Distributed C runtime library (dcc.a): contains routines for process creation, synchonization, ... o Distributed C administration process (dcadmin): realizes special runtime features. o Distributed C installer program (dcinstall): performes the generation and storage of binaries by local and remote compilation in heterogeneous UNIX-networks. The environment runs on the following systems: o Sun SPARCstations (SunOS), o Hewlett Packard workstations (HP/UX), o IBM workstations (AIX), o Convex supercomputers (ConvexOS), o IBM Workstations (AIX). o homogeneous and heterogeneous networks of the systems as mentioned above. Moreover the implementation was designed for the use on Intel iPSC/2s. The Distributed C Development Environment source code is provided "as is" as public domain software and distributed in the hope that it will be useful, but without warranty of any kind. Keywords: distributed programming, parallel programming, Distributed C -- Christoph Pleier pleierc@informatik.tu-muenchen.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: James.Cownie@meiko.com (James Cownie) Subject: Re: 65 processor Meiko for sale. References: <1993Oct21.233844.3378@jade.ab.ca> > We have some Meiko equipment for sale. 
The backplanes may be more valuable > than the boards, in that it may support newer boards (can anyone comment > on this?). The M40 and M10 backplanes are also compatible with Meiko's i860 based (MK096 2xi860 8-32MB each) boards, or you could roll a Sparc in using the MK083 board. -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ruehl@cscs.ch (Roland Ruehl) Subject: ICS 94 Message-ID: <1993Oct28.122443.20292@cscs.ch> Keywords: CFP Sender: usenet@cscs.ch (NEWS Manager) Nntp-Posting-Host: zermatt.cscs.ch Reply-To: ruehl@cscs.ch Organization: Centro Svizzero di Calcolo Scientifico (CSCS), Manno, Switzerland Date: Thu, 28 Oct 1993 12:24:43 GMT Could anybody please mail me the CFP for ICS 94 ? Thank you in advance, Roland. --------------------------------------------------------------------------- Dr Roland Ruehl Phone (Manno): +41 (91) 50 8232 Section of Research and Development (SeRD) CSCS-ETH (Swiss Scientific Computing Center) FAX: +41 (91) 50 6711 6928 Manno, Switzerland E-mail: ruehl@cscs.ch Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: fortuna@cs.man.ac.uk (Armando Fortuna) Subject: CFD Newsgroup - Call For Votes Date: 28 Oct 93 14:35:44 GMT Organization: Dept Computer Science, University of Manchester, U.K. Hello everyone, I am interested in creating a Computational Fluid Dynamics newsgroup. It would be called comp.fluid.dynamics Since voting is necessary in order to include it within the "comp" hierarchy, I would ask all interested parties to please send your votes to this newsgroup. As soon as the minimum quorum is reached (about 150 "yes"), the newsgroup will be created. I don't feel it is necessary for comp.fluid.dynamics to be moderated in any way, so it won't be. The topics discussed will be anything related to computational fluid dynamics: * algorithms and their implementation, * new developments, * ... and anything interesting anyone may come up with! Everyone interested please vote!! Let's see if this time, CFD is going to get its newsgroup. Armando fortuna@cs.man.ac.uk -- Armando de Oliveira Fortuna E-mail: fortuna@cs.man.ac.uk Dept. of Computer Science Tel.: +44 61 275-6132 University of Manchester Fax.: +44 61 275-6204 Oxford Road, Manchester Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dmb@gorm.lanl.gov (David M. Beazley) Subject: Re: The Future of Parallel Computing In-Reply-To: dbader@eng.umd.edu's message of Tue, 19 Oct 1993 15:49:29 GMT Message-ID: Organization: Los Alamos National Laboratory, NM References: <1993Oct14.131912.9126@hubcap.clemson.edu> <1993Oct19.154929.15823@hubcap.clemson.edu> In article <1993Oct19.154929.15823@hubcap.clemson.edu> dbader@eng.umd.edu (David Bader) writes: > That is, is the hardware growing too fast and neglecting the > software and algorithmic concerns? > > [text deleted] > > Is the same true for the emergence of parallel computing? In my > opinion, no. We have not ground out a "standard" representation for > the model of parallel computing. We have not put enough effort into > the theory of parallel algorithmics. Throwing faster hardware at us > will not solve the problem. Even if the benchmark time for a given > application is cut in half, what happens as we try to increase the > problem size by a factor of K ? 
The compiler then must have the task > of decomposing the algorithm onto the underlying hardware. It is just > wrong to require the programmer to have a detailed knowledge of the > hardware, data layout, and compiler tricks just to get anywhere near > "benchmarked" performance rates. The ONLY way to get really high performance on any system is to have some understanding of the underlying hardware. One cannot expect any compiler to take some arbitrary code and make it run anywhere close to peak performance. Getting high performance takes a lot of work and it's up to the programmer to make it work. Even on personal computers (which have well-established compilers) taking the machine architecture into consideration can improve code performance significantly. > We are now in an age when the high performance machines have > various data network topologies, i.e. meshes, torii, linear arrays, > vector processors, hypercubes, fat-trees, switching networks, etc.. > etc.. These parallel machines might all have sexy architectures, but > we are headed in the wrong direction if we don't take a step back and > look at the future of our work. We shouldn't have to rewrite our > algorithms from scratch each time our vendor sells us the latest > hardware with amazing benchmarks. Benchmarks should also be > attainable from STANDARD compiler options. We should NOT have to > streamline routines in assembly language, give data layout directives, > nor understand the complexities of the hardware and/or data network. Again, sometimes writing routines in assembly code and mapping your data layout to the machine is the only way to get high performance. We've been working on a C code for the CM-5 and the only way to use the vector processors has been to write code in CDPEAC (assembler code). While this has taken a lot of work, our performance has jumped from 2 Gflops to rates between 25 and 50 Gflops (varying with the problem being solved, of course). We would have never seen this improvement in performance if we had relied on a compiler to optimize everything for us (our code is running faster than other codes that have relied solely on the compiler for optimization and use of the vector units). When running large production simulations, our efforts pay off because it can mean the difference between taking 30 hours of CPU time and 300 hours (and time is money as they say). Simply put, if you're not willing to put in the effort to make your code run really fast, then it never will--no matter what compiler you use. There's just no way that a compiler is going to know the best way to map out all possible problems on any particular machine. In my opinion, relying on the compiler to do everything for you encourages lazy programming and contributes nothing to pushing the performance limits of parallel computing hardware or software. Dave Beazley dmb@viking.lanl.gov ------------------------------------------------------------ Opinions expressed here not necessarily those of my employer. (or mine for that matter). Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: djb1@ukc.ac.uk (Dave Beckett) Subject: [LONG] Transputer, occam and parallel computing archive: NEW FILES Organization: Computing Lab, University of Kent at Canterbury, UK. This is the new files list for the Transputer, occam and parallel computing archive. Please consult the accompanying article for administrative information and the various ways to access the files.
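[As a small aside on the machine-aware coding point made in the performance discussion above, and not taken from any of the codes mentioned there: the sketch below shows, in plain C, the classic loop-blocking (tiling) transformation that keeps sub-blocks of a matrix multiply resident in cache. N and BS are arbitrary illustrative values; BS in particular would have to be tuned to the cache of the machine at hand.]

    /* A minimal sketch, not from any of the postings above: the same
     * matrix multiply written naively and with loop blocking (tiling),
     * a standard way of taking the memory hierarchy into account.
     * N and BS are made-up values; BS must be tuned per machine. */

    #include <stdio.h>

    #define N  256
    #define BS 32                /* assumed tile size - tune per machine */

    static double a[N][N], b[N][N], c[N][N];

    void matmul_naive(void)
    {
        int i, j, k;
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                for (k = 0; k < N; k++)
                    c[i][j] += a[i][k] * b[k][j];
    }

    void matmul_blocked(void)
    {
        int ii, jj, kk, i, j, k;

        /* Work on BS x BS tiles so the pieces of a, b and c in use
         * can stay resident in the cache while they are reused. */
        for (ii = 0; ii < N; ii += BS)
            for (kk = 0; kk < N; kk += BS)
                for (jj = 0; jj < N; jj += BS)
                    for (i = ii; i < ii + BS; i++)
                        for (k = kk; k < kk + BS; k++)
                            for (j = jj; j < jj + BS; j++)
                                c[i][j] += a[i][k] * b[k][j];
    }

    int main(void)
    {
        int i, j;

        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++) {
                a[i][j] = 1.0;
                b[i][j] = 2.0;
                c[i][j] = 0.0;
            }

        matmul_blocked();                   /* same result as matmul_naive() */
        printf("c[0][0] = %f\n", c[0][0]);  /* 256 * 1.0 * 2.0 = 512.000000 */
        return 0;
    }

[On most cache-based machines the blocked version reuses data far more effectively than the naive one, which is exactly the kind of restructuring a compiler cannot in general be relied upon to discover on its own.]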
[For experts: ftp to unix.hensa.ac.uk and look in /parallel] Dave NEW FEATURES ~~~~~~~~~~~~ * FULL TEXT INDEX A nightly full text index is now being generated, of all the individual Index files. This is probably the best way to find something by 'grepping' the file although it is very large. /parallel/index/FullIndex.ascii 223327 bytes /parallel/index/FullIndex.ascii.Z 74931 bytes (compressed) /parallel/index/FullIndex.ascii.gz 52024 bytes (gzipped) * MIRRORED PACKAGES I am mirroring (keeping up to date) several parallel computing and related packages including Adaptor, SR, VCR, PARMACS, F2C, P4 etc and intend to add more (suggestions welcome). NEW AREAS ~~~~~~~~~ /parallel/software/linux LINUX related device drivers and ports of transputer software. /parallel/reports/announcements /parallel/papers/announcements New areas for anouncements of reports and papers. NEW FILES since 11th October 1993 (newest first) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /parallel/software/miami/mpsim.tar.Z Small multiprocessor simulator package written by Felix Quevedo at the University of Miami. ASCII and PS versions of the documentation are provided as well as troff source. Works on Sun3-SunOS3.5, Sun4-SunOS4.0.3, VAX-Ultrix3.0 and MAC-A/UX1.1. /parallel/conferences/int-dev-par-comp Details of two-day Symposium on International Developments in Parallel Computing being held from 9th-10th November 1993 at the University of Liverpool organised by ACT-UETP and the Institute of Advanced Scientific Computation (IASC), University of Liverpool. /parallel/software/washington-seattle/chaos Chaos Simulator Package - a compiled routing network simulator written by the Chaos router team at the University of Washington, Seattle, USA. /parallel/reports/announcements/genetic-alg-robotics Announcement of Technical Report available for FTP about Genetic Algorithms for robotics (in French). "Algorithmes genetiques paralleles pour la planification de trajectoires de robots en environnement dynamique" by Thierry Chatroux of Institut Imag, Grenoble, France. /parallel/conferences/can-fr-conf-par-comp Final announcement and call for papers for the Canada-France conference on Parallel Computing being held from 18th-20th May 1994 at Concordia University, Montreal, Canada the week before the 1994 ACM Symposium on Theory of Computing. Deadlines: 31st October 1993: Submission of papers; 31st January 1994: Acceptance; 28th February 1994: Camera-ready copy. /parallel/software/rice/netsim/README Details and copyright message for the package: READ THIS FIRST. (No commercial use of software allowed) /parallel/software/rice/netsim/sim.tar.Z A family of discrete-event simulators based on the C programming language by J. Robert Jump of Rice University, Houston, Texas, USA. Contains YACSIM - A process-oriented discrete-event simulator, NETSIM - A general-purpose interconnection network simulator and DBSIM - A debugging utility for use with any of the simulators. Archive contains source code and documentation and papers in PostScript. /parallel/reports/misc/twelve-ways-to-fool-masses.tex /parallel/reports/misc/twelve-ways-to-fool-masses.ps.Z Report: "Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers" by David Bailey posted to comp.parallel by Andrew Stratton in LaTeX form (twelve-ways-to-fool-masses.tex) and compressed Postscript (twelve-ways-to-fool-masses.ps.Z). /parallel/bibliographies/parallelism-biblos-FAQ Frequently Asked Questions (FAQ) about the bibliography on parallelism maintained by Eugene N. 
Miya for the last ten+ years. Written by the maintainer. /parallel/faqs/conferences/opopac-workshop-programme More programme details for the International Workshop On Principles Of PArallel Computing (OPOPAC) / "Journees internationales sur des problemes fondamentaux de l'informatique parallele et distribuee" being held from 23rd-26th November 1993 in Lacanau, France. [See also /parallel/faqs/conferences/opopac-workshop]. /parallel/faqs/parallel-Fourier-transforms Summary of some citations for parallel Fourier transform papers/algorithms by Mike Gross. /parallel/faqs/message-passing-simulators Summary of responses to a query about freely available message passing software simulators by David C Blight and details of how to obtain it. /parallel/conferences/esa94 Call for papers for 2nd Annual European Symposium On Algorithms (ESA'94) being held from 26th-28th September 1994 near Utrecht, The Netherlands. Deadlines: Extended Abstract/Full draft paper: 25th March 1994; Acceptance: 20th May 1994; Camera-ready copy: 20th June 1994. /parallel/conferences/rsp94 Call for papers for 5th IEEE International Workshop on Rapid System Prototyping (RSP 94) being held from 21st-23rd June 1994 at Grand Hotel de Paris at Villard de Lans, Grenoble, France. The conference is in English. Deadlines: Papers: 10th January 1994; Acceptance: 10th February 1994; Camera-ready copy: 24th March 1994. /parallel/conferences/jisc-ntsc-cluster-workshop Details of the JISC/NTSC Workshop on Cluster Computing being held on Tuesday 2nd November 1993 at the University of Edinburgh. /parallel/faqs/industrial-par-tools Summary of responses about a query on tools for industrial (non-research) parallel programming by Richard Schooler /parallel/software/linux/assembler/assembler.announcement Announcement of new version of assembler and servers written by Michael Haardt for Linux 0.99.13 /parallel/software/linux/assembler/asld.tz New version of assembler and linker (gzipped tar file) /parallel/software/linux/assembler/b004.tz Patches for Christoph Niemann's link device driver for Linux 0.99.13. For documentation, see his original release. (gzipped tar file) /parallel/software/linux/assembler/server.tz Iserver, AFserver and cserver (gzipped tar file) /parallel/software/linux/assembler/tdis.tz Disassembler (gzipped tar file) /parallel/software/ipsc/ipscs-to-pvm3.1 Announcement of v0.1 of Intel iPSC to PVM3.1 library written by J. Sunny Egbo et al /parallel/software/ipsc/i2pvm0.1.tar.Z Version 0.1 of the Intel-iPSC to PVM Library: A set of routines to allow Intel iPSC programs to run under a Parallel Virtual Machine. CAVEATS: 1. This version (0.1) only works for PVM3.1 and supports most of the Intel iPSC node calls, and a few host programs. 2. We are working on the next release which includes most of the host library calls, host programs, and will run under PVM3.2. /parallel/bibliographies/par-functional Directory containing an annotated bibliography on parallel functional programming (including BibTeX sources) listing more than 350 publications mostly including their full abstracts by Wolfgang Schreiner /parallel/conferences/sms94 Call for papers for 2nd International Conference on Software for Multiprocessors and Supercomputers - Theory, Practice, Experience (SMS TPE'94) being held in early September 1994 in Moscow, Russia. Organising institutions are the Computing Centre of the Russian Academy of Sciences and the Institute for the Problems of Cybernetics of the Russian Academy of Sciences.
Deadlines: Abstracts: 31st December 1993; Acceptance: 1st March 1994; Camera-ready copies: 1st May 1994. /parallel/documents/misc/optical-nets-in-multiprocessors.announce Announcement of PhD thesis by James Olsen: "Control and Reliability of Optical Networks in Multiprocessors" available in both Postscript and LaTeX formats via anonymous ftp. /parallel/conferences/shpcc94 Update to call for papers for 1994 Scalable High Performance Computing Conference (SHPCC94) to be held from 23rd-25th May 1994 in Knoxville, Tennessee, USA. Deadlines: PAPERS: Extended Abstracts: 1st November 1993; Acceptance: 14th January 1994; Camera-ready copies: 14th February 1994. POSTERS: Short Abstracts: 1st November 1993; Acceptance: 14th January 1994; TUTORIALS: Proposals: 1st November 1993; Acceptance: 14th January 1994. /parallel/conferences/ieee-workshop-visualisation-machine-vision Call for papers for the IEEE Workshop on Visualization and Machine Vision being held on the 24th June 1994 at The Westin Hotel, Seattle, Washington, USA (A day after CVPR at the same site, so researchers can stay an extra day and attend the workshop). Sponsored by IEEE Computer Society: Pattern Analysis and Machine Intelligence Technical Committee and Computer Graphics Technical Committee. Deadline: 13th December 1993. /parallel/conferences/PEPS-programme Programme and Registration Details for the Workshop on the Performance Evaluation and Assessment of Parallel Systems (PEPS) sponsored by the EC ESPRIT 6942 programme, the BCS and the TTCP XTP3 Technical Panel on Computer Architectures and organised by the PEPS Consortium. Workshop being held from 29th-30th November 1993 at University of Warwick, Coventry, UK. /parallel/reports/announcements/GRAPE-report-kahaner Announcement of reports by Dr. David Kahaner on the GRAPE (GRAvity PipE) computer, developed at the University of Tokyo, for simulation of N-body systems. The newest version will have TFLOPs performance, using 2000 600MFLOP chips. Posted by Rick Schlichting /parallel/journals/ppl Added Parallel Processing Letters details, call for papers and contents of some of the issues. /parallel/courses/parasoft-par-prog-course Introductory course on the theory and practice of distributed and parallel computing held by ParaSoft Corporation from 10th-12th December 1993 at Florida State University, Tallahassee, Florida, USA (At the end of Cluster Workshop) /parallel/user-groups/ppc/PPC-October Announcement of the October PPC Meeting held on October 11th. [Posted too late to arrive before the meeting date] /parallel/books/morgan-kaufmann/parallel-processing-from-applications-to-systems Announcement of book: "Parallel Processing from Applications to Systems" by Dan I. Moldovan, University of Southern California, published by Morgan Kaufmann and details of contents and costs. /parallel/conferences/wotug17 Call for papers for the 17th World occam and Transputer User Group (WoTUG) Technical Meeting being held from 11th-13th April 1994 at the University of Bristol, UK. Deadlines: Extended abstracts: 1st November 1993; Notification: Mid December 1993; Camera-ready copy: 10th January 1994. /parallel/conferences/EURO-PAR-site-bids Call for conference site bids for the new series of European Conferences called EURO-PAR (merge of CONPAR/VAPP and PARLE). Deadline for bids is Friday 5th November.
OTHER HIGHLIGHTS ~~~~~~~~~~~~~~~~ * occam 3 REFERENCE MANUAL (draft) /parallel/documents/occam/manual3.ps.Z By Geoff Barrett of INMOS - freely distributable but copyrighted by INMOS and is a full 203 page book in the same style of the Prentice Hall occam 2 reference manual. Thanks a lot to Geoff and INMOS for releasing this. * TRANSPUTER COMMUNICATIONS (WoTUG JOURNAL) FILES /parallel/journals/Wiley/trcom/example1.tex /parallel/journals/Wiley/trcom/example2.tex /parallel/journals/Wiley/trcom/trcom.bst /parallel/journals/Wiley/trcom/trcom01.sty /parallel/journals/Wiley/trcom/trcom02.sty /parallel/journals/Wiley/trcom/trcom02a.sty /parallel/journals/Wiley/trcom/transputer-communications.cfp /parallel/journals/Wiley/trcom/Index /parallel/journals/Wiley/trcom/epsfig.sty LaTeX (.sty) and BibTeX (.bst) style files and examples of use for the forthcoming Wiley journal - Transputer Communications, organised by the World occam and Transputer User Group (WoTUG). See transputer-communications.cfp for details on how to submit a paper. * FOLDING EDITORS: origami, folding micro emacs /parallel/software/folding-editors/fue-original.tar.Z /parallel/software/folding-editors/fue-ukc.tar.Z /parallel/software/folding-editors/origami.zip /parallel/software/folding-editors/origami.tar.Z Two folding editors - origami and folding micro-emacs traditionally used for occam programming environments due to the indenting rules. Origami is an updated version of the folding editor distribution as improved by Johan Sunter of Twente, Netherlands. fue* are the original and UKC improved versions of folding micro-emacs. * T9000 SYSTEMS WORKSHOP REPORTS /parallel/reports/wotug/T9000-systems-workshop/* The reports from the T9000 Systems Workshop held at the University of Kent at Canterbury in October 1992. It contains ASCII versions of the slides given then with the permission of the speakers from INMOS. Thanks to Peter Thompson and Roger Shepherd for this. Subjects explained include the communications architecture and low-level communications, the processor pipeline and grouper, the memory system and how errors are handled. * THE PETER WELCH PAPERS /parallel/papers/ukc/peter-welch Eleven papers by Professor Peter Welch and others of the Parallel Processing Group at the Computing Laboratory, University of Kent at Canterbury, England related to occam, the Transputer and other things. Peter is Chairman of the World occam and Transputer User Group (WoTUG) * ISERVERS /parallel/software/inmos/iservers Many versions of the iserver- the normal version, one for Windows (WIserver), one for etherneted PCs (PCServer) and one for Meiko hardware. * MIRROR OF PARLIB /parallel/parlib Mirror of the PARLIB archive maintained by Steve Stevenson, the moderator of the USENET group comp.parallel. * UKC REPORTS /pub/misc/ukc.reports The internal reports of the University of Kent at Canterbury Computing Laboratory. Many of these contain parallel computing research. * NETLIB FILES /netlib/p4 /netlib/pvm /netlib/pvm3 /netlib/picl /netlib/paragraph /netlib/maspar As part of the general unix.hensa.ac.uk archive, there is a full mirror of the netlib files for the above packages (and the others too). Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: djb1@ukc.ac.uk (Dave Beckett) Subject: [LONG] Transputer, occam and parallel computing archive: ADMIN Organization: Computing Lab, University of Kent at Canterbury, UK. Summary: Loads more files. See NEW FILES archive for details. 
Keywords: transputer, occam, parallel, archive This is the administrative information article for the Transputer, occam and parallel computing archive. Please consult the accompanying article for details of the new files and areas. In the last two and a half weeks I've added another 8 megabytes of files to the archive at unix.hensa.ac.uk in /parallel. It currently contains over 57 Mbytes of freely distributable software and documents, in the transputer, occam and parallel computing subject area. STATISTICS ~~~~~~~~~~ >2940 users accessed archive (560 more than last time) >1400 Mbytes transfered (200MB more) since the archive was started in early May. Top 10 files accessed, excluding Index files 679 /parallel/README 340 /parallel/pictures/T9000-schematic.ps.Z 316 /parallel/reports/misc/soft-env-net-report.ps.Z 264 /parallel/documents/inmos/occam/manual3.ps.Z 201 /parallel/Changes 162 /parallel/software/folding-editors/origami.tar.Z 161 /parallel/reports/ukc/T9000-systems-workshop/all-docs.tar.Z 142 /parallel/index/ls-lR.Z 129 /parallel/books/prentice-hall 109 /parallel/books/occam-books It's gratifying to see the top level README has such interest! WHERE IS IT? ~~~~~~~~~~~~ At the HENSA (Higher Education National Software Archive) UNIX archive. The HENSA/UNIX archive is accessible via an interactive browsing facility, called fbr as well as email, DARPA ftp, gopher and NI-FTP (Blue Book) services. For details, see below. HOW DO I FIND WHAT I WANT? ~~~~~~~~~~~~~~~~~~~~~~~~~~ The files are all located in /parallel and each directory contains a short Index file of the contents. If you want to check what has changed in between these postings, look at the /parallel/Changes file which contains the new files added. There is also a full text index available of all the files in /parallel/index/FullIndex.ascii but be warned - it is very large (over 200K). Compressed and gzipped versions are in the same directory. For those UNIX dweebs, there are output files of ls-lR in /parallel/index/ls-lR along with compressed and gzipped versions too. HOW DO I CONTACT IT? ~~~~~~~~~~~~~~~~~~~~ There are several ways to access the files which are described below - log in to the archive to browse files and retrieve them by email; transfer files by DARPA FTP over JIPS or use Blue Book NI-FTP. Logging in: ~~~~~~~~~~~ JANET X.25 network: call uk.ac.hensa.unix (or 000049200900 if you do not have NRS) JIPS: telnet unix.hensa.ac.uk (or 129.12.21.7) Once connected, use the login name 'archive' and your email address to enter. You will then be placed inside the fbr restricted shell. Use the help command for up to date details of what commands are available. Transferring files by FTP ~~~~~~~~~~~~~~~~~~~~~~~~ DARPA ftp from JIPS/the internet: site: unix.hensa.ac.uk (or 129.12.21.7) login: anonymous password: Use the 'get' command to transfer a file from the remote machine to the local one. When transferring a binary file it is important to give the command 'binary' before initiating the transfer. For more details of the 'ftp' command, see the manual page by typing 'man ftp'. The NI-FTP (Blue Book) request over JANET path-of-file from uk.ac.hensa.unix Username: guest Password: The program to do an NI-FTP transfer varies from site to site but is usually called hhcp or fcp. Ask your local experts for information. Transferring files by Email ~~~~~~~~~~~~~~~~~~~~~~~~~~ To obtain a specific file email a message to archive@unix.hensa.ac.uk containing the single line send path-of-file or 'help' for more information. 
Browsing and transferring by gopher ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From the Root Minnesota Gopher, select the following entries: 8. Other Gopher and Information Servers/ 5. Europe/ 37. United Kingdom/ 14. HENSA unix (National software archive, University of Kent), (UK)/ 3. The UNIX HENSA Archive at the University of Kent at Canterbury/ 9. PARALLEL - Parallel Computing / and browse the archive as normal. [The numbers are very likely to change] The short descriptions are abbreviated to fit on an 80 column display but the long ones can always be found under 'General Information.' (the Index files). Updates to the gopher tree follow a little behind the regular updates. COMING SOON ~~~~~~~~~~~ A better formatted bibliography of the IOS press (WoTUG, NATUG et al) books. A HUGE bibliography of occam papers, PhD theses and publications - currently about 2000 entries. The rest of the INMOS archive server files. WoTUG related papers and information. NATUG information and membership form. A transputer book. A freely distributable occam compiler for workstations. A couple of free occam compilers for transputers. DONATIONS ~~~~~~~~~ Donations are very welcome. We do not allow uploading of files directly but if you have something you want to donate, please contact me. Dave Beckett Computing Laboratory, University of Kent at Canterbury, UK, CT2 7NF Tel: [+44] (0)227 764000 x7684 Fax: [+44] (0)227 762811 Email: djb1@ukc.ac.uk Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: Voting on CFV's on newsgroup formation To those interested in voting on new newsgroups. You must send your votes to the appropriate place. Posted votes won't count. =========================== MODERATOR ============================== Steve Stevenson fpst@hubcap.clemson.edu Administrative address steve@hubcap.clemson.edu Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell Wanted: Sterbetz, P. Floating Point Computation Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: pan@udcps3.cps.udayton.edu (Yi Pan) Subject: VLSI area of 3-D Mesh Organization: The University of Dayton Computer Science Dept., Dayton, OH Date: Thu, 28 Oct 1993 18:20:24 GMT Hi, Could anyone give me some pointers on how to derive the VLSI area of a 3-D mesh? If you know any references in this area, please let me know. Many thanks, Yi Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: blj@crhc.uiuc.edu (Bob Janssens) Newsgroups: comp.parallel,comp.sys.misc,comp.os.misc Subject: CM5 emulation under SunOs? Date: 28 Oct 1993 15:52:55 -0500 Organization: Center for Reliable and High-Performance Computing, University of Illinois at Urbana-Champaign Since Thinking Machines' CM5 runs a modified version of SunOs, it seems to me like it should be relatively simple to write a CM5 emulator that runs on my SPARCstation under SunOs. Does such an emulator exist and is it available? Thanks, Bob Janssens janssens@uiuc.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From: jet@nas.nasa.gov (J. Eric Townsend) Subject: mailing list info on CM-5, iPSC/860 Sender: news@nas.nasa.gov (News Administrator) Nntp-Posting-Host: boxer.nas.nasa.gov Organization: NAS/NASA-Ames Research Center Date: Thu, 28 Oct 1993 23:19:05 GMT Apparently-To: comp-parallel@ames.arc.nasa.gov J.
Eric Townsend (jet@nas.nasa.gov) last updated: 18 Oct 1993 This file is posted to USENET automatically on the 1st and 15th of each month. It is mailed to the respective lists to remind users how to unsubscribe and set options. INTRODUCTION ------------ Several mailing lists exist at NAS for the discussion of using and administrating Thinking Machines CM-5 and Intel iPSC/860 parallel supercomputers. These mailing lists are open to all persons interested in the systems. The lists are: LIST-NAME DESCRIPTION cm5-users -- discussion of using the TMC CM-5 cm5-managers -- " " administrating the TMC CM-5 ipsc-users -- " " using the Intel iPSC/860 ipsc-admin -- " " administrating the iPSC/860 The ipsc-* lists at cornell are going away, the lists here will replace them. (ISUG members will be receiving information on this in the near future.) The cm5-users list is intended to complement the lbolt list at MSC. SUBSCRIBING/UNSUBSCRIBING ------------------------- All of the above lists are run with the listserv package. In the examples below, substitute the name of the list from the above table for the text "LIST-NAME". To subscribe to any of the lists, send email to listserv@boxer.nas.nasa.gov with a *BODY* of subscribe LIST-NAME your_full_name Please note: - you are subscribed with the address that you sent the email from. You cannot subscribe an address other than your own. This is considered a security feature, but I haven't gotten around to taking it out. - your subscription will be handled by software, so any other text you send will be ignored Unsubscribing It is important to understand that you can only unsubscribe from the address you subscribed from. If that is impossible, please contact jet@nas.nasa.gov to be unsubscribed by hand. ONLY DO THIS IF FOLLOWING THE INSTRUCTIONS DOES NOT PRODUCE THE DESIRED RESULTS! I have better things to do than manually do things that can be automated. To unsubscribe from any of the mailing lists, send email to listserv@boxer.nas.nasa.gov with a body of unsubscribe LIST-NAME OPTIONS ------- If you wish to receive a list in digest form, send a message to listserv@boxer.nas.nasa.gov with a body of set LIST-NAME mail digest OBTAINING ARCHIVES ------------------ There are currently no publicly available archives. As time goes on, archives of the lists will be made available. Watch this space. -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: "Bruce Shriver" Subject: Call for Proposals - HICSS-28 Date: Thu, 28 Oct 93 19:52:16 -0500 Organization: PSI Public Usenet Link Call For Minitrack Proposals Parallel and Distributed Computing: Theory, Systems, and Applications The Software Technology Track of HICSS-28 28th Hawaii International Conference on System Sciences MAUI, HAWAII - JANUARY 3-6, 1995 ====================================== You are invited to submit a proposal for a minitrack for HICSS-28. The HICSS series of conferences has become a unique and respected forum in computer and information systems and technology for the exchange of ideas among the researcher and development communities in North America, the Asian and Pacific Basin Nations, and Europe. 
HICSS-28 will have Tracks in Computer Architecture, Software Technology, Biotechnology, and Information Systems. A Track consists of three full days of technical sessions, coupled with a set of advanced seminars and tutorials. A minitrack is either a half day or a full day of technical sessions. All sessions are conducted in a workshop- like setting and participants often participate in several different tracks. HICSS is sponsored by the University of Hawaii in cooperation with the IEEE Computer Society and the ACM. This particular solicitation is for the Software Technology Track which will focus on "Parallel and Distributed Computing: Theory, Systems, and Applications." The topics include: - - Application portability across diverse architectures - - Compilers and optimizations - - Debugging and performance analysis environments - - Design languages and methods for distributed/parallel systems - - Development environments - - Distributed/parallel algorithms (design and analysis) - - Distributed/parallel operating systems (including multiprocessor design issues, cache issues, communications protocols, task allocation and scheduling, load balancing, distributed microkernel systems) - - Fault tolerance in distributed/parallel systems - - Migration to distributed/parallel systems - - New and innovative paradigms, languages, and language constructs - - Real-Time (hard deadline) systems - - Scalability issues - - Semi-automatic and automatic code generation tools - - Software engineering of distributed/parallel systems - - Software frameworks and specific industrial and business applications such as - - Very large database query, data mining and decision support systems - - Distributed/parallel on-line transaction processing (OLTP) systems - - Enterprise-wide distributed/parallel computing systems - - Distributed/parallel multi-media and communications systems The proposals should be on timely and important topics in the field. Your proposal should be from five to six pages long and should: 1) Define the proposed technical area, discuss the topics the minitrack will address, and describe how they fit into the area; 2) Discuss how these topics have recently been covered in other conferences and publications to substantiate that HICSS is not only an appropriate and timely forum for the topics but also that there is a body of unpublished good work to draw from; and, 3) Contain a short bio-sketch and an explicit statement that your organization endorses your involvement and attendance and has the infrastructure to support that involvement as described in the attached sheets giving the Responsibilities of Minitrack Coordinators. Interacting with authors and referees in a fair and professional manner and employing control mechanisms that increase the overall quality of the meeting are among the major responsibilities. We highly encourage the submission of proposals by e-mail as this will significantly reduce the processing of the reviews. If sent by surface mail, send thirteen copies. Your proposal should be e-mailed to Dr. Hesham El-Rewini (e-mail: rewini@unocss.unomaha.edu). The deadlines are: November 26, 1993 Proposals Due December 31, 1993 Notification Regarding the Proposals Each proposal will be evaluated by the Advisory Committee whose decision will be based on the overall technical merit of the proposal. Since there is only a limited amount of space for conducting the meeting, the number of proposals that will be approved in each track is limited. 
We are looking forward to receiving a proposal from you. Sincerely, Track Co-Chairs =============== Hesham El-Rewini Bruce Shriver Department of Computer Science HICSS-28 Co-Chairman University of Nebraska at Omaha 17 Bethea Drive Omaha, NE 68182 Ossining, NY 10562-1620 e-mail: rewini@unocss.unomaha.edu e-mail: b.shriver@computer.org Voice: (402) 554-2852 Voice: (914) 762-3251 FAX: (402) 554-2975 FAX: (914) 941-9181 The HICSS-28 Software Technology Track Advisory Committee --------------------------------------------------------- - - Dharma Agrawal, North Carolina State University, USA - - Selim Akl, Queen's University, CANADA - - Fran Berman, University of California at San Diego, USA - - Karsten M. Decker, Swiss Scientific Computing Center, SWITZERLAND - - Ahmed K. Elmagarmid, Purdue University, USA - - Hesham El-Rewini, University of Nebraska at Omaha, USA - - Jeff Kramer, Imperial College, UK - - Tore Larsen, Tromso University, NORWAY - - Harold W. Lawson, Lawson Publishing and Consulting Inc., SWEDEN - - Stephan Olariu, Old Dominion University, USA - - M. Tamer Ozsu, University of Alberta, CANADA - - David Padua, University of Illinois, USA - - Cherri Pancake, Oregon State University, USA - - Greg Riccardi, Florida State University, USA - - Bruce Shriver, University of Southwestern Louisiana, USA - - Alok Sinha, Microsoft, USA - - Ivan Stojmenovic, University of Ottawa, CANADA THE RESPONSIBILITIES OF A MINITRACK COORDINATOR *********************************************** 1. Workshop-like Setting ************************ A HICSS minitrack consists of two or four 90-minute sessions conducted in a workshop-like setting. Each session consists of three papers; the papers are allotted thirty minutes for presentation and questions. The last session should include a forum which typically is a lively, open dialogue on the issues raised in the presentations. You are to solicit manuscripts, have them refereed, collaborate with the Track Coordinator in determining which manuscripts are to be accepted, structure the sessions, introduce the speakers in your sessions, and act as the moderator of the forum. 2. Solicit Manuscripts for the Minitrack **************************************** After your minitrack has been approved by your Track Coordinator, you are encouraged to distribute the Unified Call for Papers and Referees and place it on appropriate electronic bulletin boards. This call will be prepared by your track coordinator and cover all minitracks in the Software Technology Track. You should solicit high-quality manuscripts from people who are known to do excellent work in the field. We recommend you contact potential authors and referees to describe the overall objectives of the conference and the minitrack and solicit their ideas, a manuscript, or a commitment to referee. Each manuscript should be 22-25 type written, double-spaced pages in length. Do not accept submissions that are significantly shorter or longer than this. The material must contain original results and not have been submitted elsewhere while it is being evaluated for acceptance to HICSS. Manuscripts that have already appeared in publication are not to be con- sidered for this conference. 3. Acquire Referees Who Will Critically Review the Submitted Manuscripts ************************************************************************ Quality refereeing is essential to ensure the technical credibility of HICSS. 
Each manuscript should be stringently reviewed by a number of qualified people who are actively working in the topics dealt with in the paper. You are responsible for having each manuscript submitted to you reviewed by at least five people in addition to yourself. The author should only be given reviews that are technically substantive. If you wish to submit a paper to your own minitrack, six copies should be sent to the Track Coordinator who will administer the refereeing process. Do not use authors of manuscripts as referees as this potentially places them in a conflict of interest situation. HICSS does not have "invited" manuscripts; all submissions go through a rigorous peer refereeing process. 4. Accept Manuscripts for the Minitrack *************************************** A full-day minitrack should accept nine papers. A half-day minitrack should accept five. To ensure excellent accepted papers, typically more than two to three times the number of papers needed must actually be solicited. Many papers will not meet our quality standards (i.e., will not make it through the refereeing process) and some authors may not be able to fulfill their initial commitment and complete the paper for you. If nine technically solid papers do not survive the refereeing process, the full-day minitrack can be changed to a half-day minitrack. 5. Re-publication of the Manuscripts ************************************ We encourage you to work with an Editor-in-Chief of a professional society periodical to use your accepted papers as the basis of a special issue of the publication. Such an arrangement encourages quality submission and requires good refereeing standards. Enter into such an agreement as soon as possible. 6. Write an Introduction to the Minitrack for the Proceedings ************************************************************* After the authors have been notified of the acceptance of the final version of their manuscript, you are to write a three to four-page introduction to the minitrack for inclusion in the conference proceedings. It should not be an overview of abstracts of the papers, but should introduce the reader to the important problems that exist in the area. 7. Select the Best Paper Candidate from the Manuscripts ******************************************************* Within ten days after you have selected the manuscripts for inclusion in your minitrack, your candidate best paper selections must be forwarded to the Track Coordinator. If you have your own manuscript accepted in your minitrack, make your selection excluding your own work. An external committee will make the selection for the minitrack, considering your manuscript along with the candidates you have provided. ===== END OF CALL ===== Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.lang.functional From: cwes1@cs.aukuni.ac.nz (Clare West) Subject: Concurrent Clean file IO concurrently Organization: Computer Science Dept. University of Auckland Date: Fri, 29 Oct 1993 01:49:34 GMT Message-ID: <1993Oct29.014934.2698@cs.aukuni.ac.nz> I hope these are the right places to ask about Clean. I am trying to write a parallel program in Concurrent Clean 0.8.4. This program does some processing in parallel (which works just fine) and then does some transformations on the data and writes it out to file. At first I used one file and even if I asked for parallel execution it all happen on one processor. 
I thought "Fine only one file is going to force sequential execution anyway, why don't I try writing it out to several files" 2 problems: a) only 4 files can be opened and files can't be closed using the simulator, b) All file outout still happens on only one processor. At the moment I am going to have two versions of the output procedures, one which does the transformation and outputs to file (so I can use the results) and one which does the transformation and throws away the data (so I can do timings). The output to file is only occuring because the simulator has no graphics capability to output to screen so is not an important part of the timings. What I was wondering was if anyone knows if the restrictions I am encountering are real, or if there is a way around them which I haven't found yet. Replies here or in mail would be gratefully accepted. Clare West Auckland University, Computer Science Department. -- Official Welcomer of the RFA; BigSis to Kristiina, Eric and Chris Thinking of Maud you forget everything else. -- hack v1.0.3 Who was that Maud person anyway? -- nethack v3.1.0 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mr824312@pllab1.csie.nctu.edu.tw () Subject: About Massively Parallel Optimizer Date: 29 Oct 1993 04:56:51 GMT Organization: String to put in the Organization Header Can you tell me how to find the paper"Massively parallel searching for better algorithms " by John L. Gustafson and Srinivas Aluru ,Tech. Rep. IS-5088 UC-32 Ames Laboratory ,Iowa State University Dec. 92?Please mail me !! Thanks a lot!! -- Sincerely Yours ----------------------------------------------------------------------- Gwan-Hwan Hwang ( 6@ +a >H ) Department of Computer Science ghhwang@cs.nthu.edu.tw National Tsing Hua University TEL: 886-35-715131-x3900 Hsinchu, Taiwan , R.O.C. 886-35-554147(home) ----------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Fri, 29 Oct 1993 09:06:57 +0000 (GMT) From: hdev@cp.tn.tudelft.nl (Hans de Vreught) Subject: Re: terminology question Sender: news@mo6.rc.tudelft.nl (UseNet News System) Reply-To: J.P.M.deVreught@CP.TN.TUDelft.NL ("Hans de Vreught") Organization: Delft University of Technology (TN FI-CP) References: <1993Oct20.121916.28264@hubcap.clemson.edu> <1993Oct25.154039.10974@hubcap.clemson.edu> hohndel@informatik.uni-wuerzburg.de (Dirk Hohndel) writes: >Greg Wilson (EXP 31 dec 93) (Greg.Wilson@cs.anu.edu.au) wrote: >: I have always used the term "star" to refer to a topology in which every processor >: is connected to every other; however, I am told that the term is also used for >: topologies in which processors 1..N are connected to a distinguished central >: processor 0. Assuming that the latter definition is more common, is there a term >: for topologies of the former type? >what you mean is the Crossbars topology. You can also use "complete graph" instead of "star". -- Hans de Vreught | John von Neumann: J.P.M.deVreught@CP.TN.TUDelft.NL | Young man, in mathematics Delft University of Technology | you don't understand things, The Netherlands | you just get used to them. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: pleierc@Informatik.TU-Muenchen.DE (Christoph Pleier) Subject: Distributed programming with C on heterogeneous UNIX-networks! 
Keywords: parallel programming, distributed programming, Distributed C Originator: pleierc@hpeick9.informatik.tu-muenchen.de Sender: news@Informatik.TU-Muenchen.DE (USENET Newssystem) Organization: Technische Universitaet Muenchen, Germany Date: Fri, 29 Oct 1993 10:41:18 +0100 Distributed programming with C on heterogeneous UNIX-networks! The Distributed C Development Environment is now available by anonymous ftp from ftp.informatik.tu-muenchen.de in the directory /local/lehrstuhl/eickel/Distributed_C. The Distributed C Development Environment was developed at Technische Universitaet Muenchen, Germany, at the chair of Prof. Dr. J. Eickel and is a collection of tools for parallel and distributed programming on single-processor-, multiprocessor- and distributed-UNIX-systems, especially on heterogeneous networks of UNIX computers. The environment's main purpose is to support and to simplify the development of distributed applications on UNIX networks. It consists of a compiler for a distributed programming language, called Distributed C, a runtime library and several useful tools. The programming model is based on explicit concurrency specification in the programming language DISTRIBUTED C, which is an extension of standard C. The language constructs were mainly taken from the language CONCURRENT C developed by N. Gehani and W. D. Roome and are based on the concepts for parallel programming implemented in the language ADA. Distributed C combines ordinary programming in C with user-friendly programming of process management, i.e. the specification, creation, synchronization, communication and termination of concurrently executed processes. The Distributed C Development Environment supports and simplifies distributed programming in several ways: o Development time is reduced by checking Distributed C programs for errors during compilation. Because of that, errors within communication or synchronization actions can be detected and avoided more easily. o Programming is simplified by allowing the use of simple pointer types even on loosely-coupled systems. This is perhaps the most powerful feature of Distributed C. In this way, dynamic structures like chained lists or trees can be passed between processes elegantly and easily - even in heterogeneous networks. Only the anchor of a dynamic structure must be passed to another process. The runtime system automatically allocates heap space and copies the complete structure. o Development is user-friendly by supporting the generation and installation of the executable files. A special concept was developed for performing the generation and storage of binaries by local and remote compilation in heterogeneous UNIX-networks. o Programming difficulty is reduced by software-aided allocation of processes at runtime. Only the system administrator needs to have special knowledge about the target system's hardware. The user can apply tools to map the processes of a Distributed C program to the hosts of a concrete target system. o Execution time is reduced by allocating processes to nodes of a network with a static load balancing strategy. o Programming is simplified because singleprocessor-, multiprocessor- and distributed-UNIX-systems, especially homogeneous and heterogeneous UNIX-networks, can be programmed fully transparently in Distributed C. The Distributed C Development Environment consists mainly of the tools: o Distributed C compiler (dcc): compiles Distributed C to standard C.
o Distributed C runtime library (dcc.a): contains routines for process creation, synchronization, ... o Distributed C administration process (dcadmin): realizes special runtime features. o Distributed C installer program (dcinstall): performs the generation and storage of binaries by local and remote compilation in heterogeneous UNIX-networks. The environment runs on the following systems: o Sun SPARCstations (SunOS), o Hewlett Packard workstations (HP/UX), o IBM workstations (AIX), o Convex supercomputers (ConvexOS), o homogeneous and heterogeneous networks of the systems as mentioned above. Moreover, the implementation was designed for use on Intel iPSC/2s. The Distributed C Development Environment source code is provided "as is" as public domain software and distributed in the hope that it will be useful, but without warranty of any kind. -- Christoph Pleier pleierc@informatik.tu-muenchen.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: gibaud@pulsar.univ-valenciennes.fr (Gibaud Alain) Subject: Informations about Linda ( implementations ) Date: 29 Oct 1993 10:39:18 GMT Organization: Universite des Sciences et Technologie de LILLE, France Sender: gibaud@pulsar.univ-lille1.fr (Gibaud Alain) Nntp-Posting-Host: pulsar.univ-valenciennes.fr Keywords: Linda I need information about Linda. I would like pointers to papers, implementations ... Where can I get papers? What are the existing implementations? Where can we get them? Are there any free programs? Please send your answer by email. I will post a summary if there are interested people. Thanks in advance. A. Gibaud Universite de Valenciennes LGIL Email: gibaud@pulsar.univ-valenciennes.fr Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: QUERY: Parallel Algorithm Simulators Organization: Professional Student, University of Maryland, College Park Hello, I am looking for references to Parallel Algorithm simulators (parallel programming environments) which run on sequential platforms (UNIX workstations, PC's, etc.) Please email me any responses. Thanks! david David A. Bader Electrical Engineering Department A.V. Williams Building University of Maryland College Park, MD 20742 301-405-6755 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stellner@Informatik.TU-Muenchen.DE (Georg Stellner) Subject: Paragon Message-Passing Environment for Workstations Available Keywords: Paragon, message-passing Originator: stellner@sunbode17.informatik.tu-muenchen.de Sender: news@Informatik.TU-Muenchen.DE (USENET Newssystem) Organization: Technische Universitaet Muenchen, Germany Date: Fri, 29 Oct 1993 17:22:43 +0100 Reply-To: nxlib@informatik.tu-muenchen.de Paragon Message-Passing Environment for Workstations Available In order to develop applications for Paragon systems and to run Paragon applications on a network of workstations, we have developed the NXLIB programming library. We are now releasing V1_0 of the package under the terms of the GNU license agreement to the Paragon and workstation community (currently an implementation for Sun SPARC has been done, but ports to further machines will follow!). The sources of the library and a User's Guide are available via anonymous ftp from ftpbode.informatik.tu-muenchen.de.
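[As a rough illustration of what an NX-style node program looks like, and hence of the kind of code an emulation library such as NXLIB is meant to let you run on a workstation network, here is a minimal hedged sketch. The calls mynode, numnodes, csend and crecv follow the Intel iPSC/Paragon NX interface as recalled from memory; the prototypes and header name below are assumptions, and the exact declarations and build procedure should be taken from the NXLIB User's Guide, not from this sketch.]

    /* Hedged sketch only: an NX-style node program of the kind NXLIB
     * is intended to support on workstations.  The prototypes below
     * are assumptions recalled from the iPSC/Paragon NX interface;
     * replace them with the real NXLIB header before use. */

    #include <stdio.h>

    extern long mynode(void);                      /* assumed prototype */
    extern long numnodes(void);                    /* assumed prototype */
    extern void csend(long type, char *buf, long count,
                      long node, long ptype);      /* assumed prototype */
    extern void crecv(long typesel, char *buf, long count);

    #define MSG_TYPE 10L        /* arbitrary message type chosen here */

    int main(void)
    {
        long me    = mynode();     /* logical node number */
        long nodes = numnodes();   /* nodes in the partition */
        double value;

        if (me != 0) {
            /* every non-zero node sends one double to node 0 */
            value = (double) me;
            csend(MSG_TYPE, (char *) &value, (long) sizeof(value), 0L, 0L);
        } else {
            long i;
            double sum = 0.0;
            for (i = 1; i < nodes; i++) {
                crecv(MSG_TYPE, (char *) &value, (long) sizeof(value));
                sum += value;
            }
            printf("node 0: sum = %f from %ld nodes\n", sum, nodes - 1);
        }
        return 0;
    }

[Node 0 simply gathers one value from every other node; under an emulation library each logical node would presumably run as a process on one of the workstations in the network.]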
The related files are located in the NXLIB directory. To establish personal contact with the authors, the nxlib@informatik.tu-muenchen.de email address can be used. Stefan Lamberts, Georg Stellner -- *** Georg Stellner stellner@informatik.tu-muenchen.de *** *** Institut fuer Informatik, SAB phone: +49-89-2105-2689 *** *** Technische Universitaet Muenchen fax: +49-89-2105-8232 *** *** 80290 Muenchen room: S 1211 *** !!! ^^^^^^^^^^^^^^ new address, please update Your archives !!! Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Fri, 29 Oct 93 12:35:09 EDT From: toci@dimacs.rutgers.edu (Pat Toci) Subject: HERE IS THE WORKSHOP ANNOUNCEMENT TO SEND OUT FROM DIMACS PRELIMINARY PROGRAM and REGISTRATION INFORMATION DIMACS Workshop on Parallel Algorithms: From Solving Combinatorial Problems to Solving Grand Challenge Problems November 17-19, 1993 In the context of the 1993-94 DIMACS special year on Massively Parallel Computation, a three-day workshop on ``Parallel Algorithms: From Solving Combinatorial Problems to Solving Grand Challenge Problems'' will be held on November 17-19, 1993. The focus of the workshop will be the general area of parallel algorithms. The scope includes the study of basic problems in parallel computation, on the one hand, and the relevance of parallel computation to various applications, including the so-called Grand Challenge Problems, on the other hand. PARTICIPATION The workshop will be held at DIMACS at Rutgers University, Piscataway, New Jersey. DIMACS is the National Science Foundation science and technology center for discrete mathematics and computer science. It is a consortium of Rutgers and Princeton Universities, AT&T Bell Laboratories, and Bellcore. Co-organizers for this workshop are: Jim Flanagan (CAIP-Rutgers) [flanagan@caip.rutgers.edu] Yossi Matias (AT&T Bell Labs) [matias@research.att.com] Vijaya Ramachandran (U. Texas) [vlr@cs.utexas.edu] The workshop will include invited presentations and contributed talks. REGISTRATION The DIMACS Conference Center at Rutgers can accommodate about 100 participants. Subject to this capacity constraint, the workshop is open to all researchers. To register, contact Pat Toci [toci@dimacs.rutgers.edu, (908) 932-5930]. If possible, please register by NOVEMBER 10, although registration at the conference is permitted. THERE IS NO REGISTRATION FEE.
PRELIMINARY PROGRAM Wednesday, November 17 ========================= 8:00 - 8:50 Light Breakfast ------------------------- 8:55 - 9:00 Welcoming remarks from DIMACS 9:00 - 9:30 NSF representative TBA [tentative] 9:30 - 10:00 Gary Miller (CMU) Numeric and Combinatorial Aspects to Parallel Scientific Computation ------------------------- 10:00 - 10:30 Coffee Break ------------------------- 10:30 - 11:00 Richard Cole (NYU) 2-D Pattern Matching 11:00 - 11:30 Uzi Vishkin (Maryland & Tel Aviv) Efficient Labeling of Substrings 11:30 - 12:00 Pierre Kelsen (U British Columbia) Constant Time Parallel Indexing of Points in a Triangle 12:00 - 12:20 Pangfeng Liu (DIMACS) Experiences with Parallel N-body simulations ------------------------- 12:30 - 2:00 LUNCH ------------------------- 2:00 - 2:30 Victor Pan (CUNY) Efficient Parallel computations in Linear Algebra with Applications 2:30 - 2:50 Roland Wunderling (Berlin) On the Impact of Communication Latencies on Distributed Sparse LU Factorization 2:50 - 3:20 Ian Parberry (North Texas U) Algorithms for Touring Knights ------------------------- 3:20 - 3:50 Coffee Break ------------------------- 3:50 - 4:20 Phil Klein (Brown) A Linear-Processor Polylog-Time Parallel Algorithm for Shortest Paths in Planar Graphs 4:20 - 4:50 Edith Cohen (Bell Labs) Undirected Shortest Paths in Polylog-Time and Near-Linear Work 4:50 - 5:20 Lin Chen (USC) Graph Isomorphism and Identification Matrices: Parallel Algorithms ------------------------- 5:30 Wine and Cheese Reception ------------------------- Thursday, November 18 ========================= 8:00 - 8:45 Light Breakfast ------------------------- 8:50 - 9:30 TBA 9:30 - 10:00 Vijaya Ramachandran (U Texas at Austin) Parallel Graph Algorithms: Theory and Implementation ------------------------- 10:00-10:30 Coffee Break ------------------------- 10:30 - 11:00 Zvi Kedem (NYU) Towards High-Performance Fault-Tolerant Distributed Processing 11:00 - 11:30 Torben Hagerup (Max Planck Inst) Fast Deterministic Compaction and its Applications 11:30 - 12:00 Phil Gibbons (Bell Labs) Efficient Low Contention Parallel Algorithms 12:00 - 12:30 Paul Spirakis (Patras) Paradigms for Fast Parallel Approximations for Problems that are Hard to Parallelize ------------------------- 12:30 - 2:00 LUNCH ------------------------- 2:00 - 2:40 Olof Widlund (NYU) Some Recent Results on Schwarz Type Domain Decomposition Algorithms 2:40 - 3:10 Jan Prins (U North Carolina at CH) The Proteus System for the Development of Parallel Algorithms 3:10 - 3:40 Yuefan Deng (SUNY at Stony Brook) Parallel Computing Applied to DNA-protein Interaction Study: A Global Nonlinear Optimization Problem ------------------------- 3:40 - 4:10 Coffee Break ------------------------- 4:10 - 4:40 Rajeev Raman (Maryland) Optimal Parallel Algorithms for Searching a Totally Monotone Matrix 4:40 - 5:10 Teresa Przytycka (Odense) Trade-offs in Parallel Computation of Huffman Tree and Concave Least Weight Subsequence 5:10 - 5:40 Vijay Vazirani (DIMACS & IIT) A Primal-dual RNC Approximation Algorithm for (multi)-Set (multi)-Cover and Covering Integer Programs ------------------------- Friday, November 19 ========================= 8:00 - 8:45 Light Breakfast ------------------------- 8:50 - 9:20 Mike Goodrich (Johns Hopkins) Parallel Methods for Computational Geometry 9:20 - 9:50 Yossi Matias (Bell Labs) Highly Parallel Randomized Algorithms - Some Recent Results 9:50 - 10:10 Dina Kravets (NJIT) All Nearest Smaller Values on Hypercube with Applications ------------------------- 
10:10-10:40 Coffee Break ------------------------- 10:40 - 11:10 Mike Atallah (Purdue) Optimal Parallel Hypercube Algorithms for Polygon Problems 11:10 - 11:40 Ernst Mayr (Munich) Optimal Tree Contraction on the Hypercube and Related Network 11:40 - 12:00 David Haglin (Mankato State U) Evaluating Parallel Approximation Algorithms: With a Case Study in Graph Matching 12:00 - 12:20 Jesper Traff (Copenhagen) A Distributed Implementation of an Algorithm for the Maximum Flow Problem ------------------------- 12:20 - 1:50 LUNCH ------------------------- 1:50 - 2:20 Joseph JaJa (Maryland) Efficient Parallel Algorithms for Image Processing 2:20 - 2:50 Rainer Feldmann (Paderborn) Game Tree Search on Massively Parallel Systems 2:50 - 3:10 Stefan Tschoke (Paderborn) Efficient Parallelization of a Branch & Bound Algorithm for the Symmetric Traveling Salesman Problem 3:10 - 3:30 Erik Tarnvik (Umea, Sweden) Solving the 0-1 Knapsack Problem on a Distributed Memory Multicomputer ------------------------- 3:30 - 4:00 Coffee Break ------------------------- 4:00 - 4:30 Aravind Srinivasan (Institute for Advanced Study and DIMACS) Improved parallel algorithms via Approximating Probability Distributions 4:30 - 4:50 Per Laursen (Copenhagen) Parallel Simulated Annealing Using Selection and Migration -- an Approach Inspired by Genetic Algorithms 4:50 - 5:20 Zvi Galil (Columbia U & Tel Aviv) From the CRCW-PRAM to the HCUBE via the CREW-PRAM and the EREW-PRAM or In the Defense of the PRAM ------------------------- TRAVEL AND HOTEL INFORMATION: It is recommended that participants arriving by plane fly into Newark Airport. Flying into Kennedy or La Guardia can add more than an hour to the travel time to DIMACS. DIMACS has successfully and quite pleasantly used the Comfort Inn and the Holiday Inn, both in South Plainfield - they are next to each other. The Comfort Inn gives DIMACS the rate of $40.00 and the Holiday Inn of $60.00 (includes a continental breakfast). The Comfort Inn's # is 908-561-4488. The Holiday Inn's # is 908-753-5500. They both provide free van service to and from DIMACS. If desired, hotel reservations can be made by Pat Toci [toci@dimacs.rutgers.edu, (908) 932-5930], the workshop coordinator. She will need to know the date of arrival and departure, which hotel is preferred, and a credit card and expiration number. To travel between Newark Airport and DIMACS/hotels, we recommend ICS Van Service, 1-800-225-4427. The rate is $19.00 per person. It must be booked in advance. From the New York airports, participants may take the Grayline Air (bus) Shuttle (1-800-451-0455) to Newark and then ICS Van service from there. Participants arriving to DIMACS by car will need a parking permit. Parking permits can be obtained in advance by sending email to Pat Toci. Otherwise, they can be obtained any day of the workshop. All workshop events will take place at DIMACS, located in the CoRE Building of Rutgers University, Busch Campus, in Piscataway, New Jersey. For further questions regarding local transportation and accommodations, or to obtain detailed driving directions to the hotels and to DIMACS, contact Pat Toci [toci@dimacs.rutgers.edu, (908) 932-5930]. 
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: drg@cs.city.ac.uk (David Gilbert) Newsgroups: comp.lang.prolog,comp.theory,comp.parallel,sci.logic Subject: Workshop on concurrency in computational logic Date: 29 Oct 1993 17:29:18 -0000 Organization: Computer Science Dept, City University, London Workshop on concurrency in computational logic December 13-14 1993 Department of Computer Science, City University, London, United Kingdom Concurrency is a seminal topic in computer science, and is a research area of growing importance in the computational logic community. The methodologies required to describe, reason about and construct such systems encompass a wide range of viewpoints and some are still quite recent in origin. It is the aim of this workshop to collect researchers together in order to facilitate the exchange of ideas on concurrency in computational logic. Contributions are invited on the following topics: * system specification * semantics and theory * language design * programming methodologies * program analysis and transformation * programming environments Submissions can be extended abstracts or full papers, and should be limited to 15 pages. Electronic submission is preferred, as either LaTeX source or encapsulated postscript. Research students are particularly encouraged to make informal presentations of their research activities, based around a brief abstract, or to submit contributions for a poster display. Submissions should be sent to the following address and should be received by 31 October 1993 Dr. D. Gilbert Department of Computer Science, City University Northampton Square London EC1V 0HB UK email: drg@cs.city.ac.uk Proceedings will be distributed on an informal basis at the workshop to encourage presentation of ongoing work. However, it is intended that selected papers will be published in formal proceedings after the workshop. This workshop is organised jointly by City University and the University of Namur under the auspices of the Association of Logic Programming (UK), the British Council (UK), and the Commissariat General aux Relations Internationales (Belgium). Programme committee: Koen De Bosschere, University of Gent, Belgium David Gilbert, City University, UK Jean-Marie Jacquet, University of Namur, Belgium Luis Monteiro, University of Lisbon, Portugal Catuscia Palamidessi, University of Genova, Italy Jiri Zlatuska, Masaryk University, Czech Republic Important dates: Deadline for paper submission: 31 October 1993 Notification for acceptance of presentation: 22 November 1993 -- D R Gilbert tel: +44-71-477-8444 (direct) Department of Computer Science fax: +44-71-477-8587 City University, Northampton Square email: drg@cs.city.ac.uk Northampton Square, London EC1V 0HB, UK uucp: drg@citycs.uucp Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mayshah@gandalf.rutgers.edu (Mayank Shah) Newsgroups: comp.parallel,comp.protocols.tcp-ip,comp.unix.programmer,comp.unix.questions Subject: Re: Availability of a Distributed Computing Environment API? Date: 30 Oct 93 04:37:25 GMT References: <1993Oct25.154118.11125@hubcap.clemson.edu> Followup-To: comp.parallel Organization: Rutgers Univ., New Brunswick, N.J. blacey@cerf.net (Bruce B. Lacey) writes: >We are in the process of developing an image processing workbench that >will have services distributed amongst multiple UNIX machines. 
To >support this distributed computing environment (DCE) paradigm (multiple >clients, multiple servers), we need a priority based message passing >mechanism that operates using UNIX sockets. While we can develop our >own, I am sure that we are not alone in this need. I was wondering if >any fellow netters know of a commercial or preferably shareware package >that provides this sort of message passing mechanism over UNIX sockets. I am also interested in such a thing, so I would appreciate it if you could post your comments on the net. Thanks, Mayank -- ------------------------------------------------------------------------------ Mayank Shah Rutgers University mayshah@gandalf.rutgers.edu
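No off-the-shelf package is named in this thread, so what follows is only a rough sketch of the usual do-it-yourself approach to priority-based message passing over a UNIX socket: frame every message with a priority byte and a length, and let the receiver queue incoming frames per priority and always hand back the most urgent one first. The pm_send, pm_pump and pm_next names are invented for this illustration, and a real implementation would also have to loop on partial reads and writes and agree on a byte order.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define NPRIO 8                      /* priorities 0 (urgent) .. 7 (bulk) */

struct pm_msg { struct pm_msg *next; uint32_t len; char data[1]; };
static struct pm_msg *queue[NPRIO];  /* one FIFO per priority level */

/* Send one message: a 5-byte header (priority, length) then the payload. */
int pm_send(int fd, unsigned prio, const void *buf, uint32_t len)
{
    unsigned char hdr[5];
    hdr[0] = (unsigned char)prio;
    memcpy(hdr + 1, &len, 4);        /* host byte order; fine on one machine */
    if (write(fd, hdr, 5) != 5) return -1;
    return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
}

/* Pull one frame off the socket and append it to its priority queue. */
int pm_pump(int fd)
{
    unsigned char hdr[5]; uint32_t len; struct pm_msg *m, **tail;
    if (read(fd, hdr, 5) != 5) return -1;    /* real code: loop on short reads */
    memcpy(&len, hdr + 1, 4);
    m = malloc(sizeof *m + len);
    if (!m || read(fd, m->data, len) != (ssize_t)len) { free(m); return -1; }
    m->len = len; m->next = NULL;
    for (tail = &queue[hdr[0] % NPRIO]; *tail; tail = &(*tail)->next) ;
    *tail = m;
    return 0;
}

/* Deliver the oldest message of the most urgent non-empty priority. */
struct pm_msg *pm_next(void)
{
    for (int p = 0; p < NPRIO; p++)
        if (queue[p]) { struct pm_msg *m = queue[p]; queue[p] = m->next; return m; }
    return NULL;                     /* nothing queued */
}

A receiver would typically call pm_pump whenever select() reports the socket readable and pm_next whenever it is ready to process the next request; the priority ordering lives entirely in the receiver, since a single stream socket delivers its bytes strictly in order.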
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: elm@cs.berkeley.edu (ethan miller) Subject: Re: The Future of Parallel Computing Date: 30 Oct 93 16:14:50 Organization: Berkeley--Shaken, not Stirred References: <1993Oct14.131912.9126@hubcap.clemson.edu> <1993Oct19.154929.15823@hubcap.clemson.edu> Reply-To: elm@cs.berkeley.edu Nntp-Posting-Host: terrorism.cs.berkeley.edu In-Reply-To: dmb@gorm.lanl.gov's message of Thu, 28 Oct 1993 18:05:21 GMT >>>>> "David" == David M Beazley writes: David> In my opinion, relying on the compiler to do everything for you David> encourages lazy programming and contributes nothing to pushing David> the performance limits of parallel computing hardware or David> software. This statement sounds remarkably like one that might have been made around the time that Fortran was invented. "How do you expect your program to perform well if you let some compiler turn it into assembly? If you want to push the performance limits of your hardware, you *have* to code in assembly." These days, very few people write assembly language for applications on sequential computers. Yes, they could probably make their program run faster if they did. However, the added effort isn't worth the relatively small speedup. The goal of compiling for parallel code should NOT necessarily be "the best possible code;" it should be "reasonably close to the best possible code." ethan -- +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+ ethan miller--cs grad student | "Why is it whatever we don't elm@cs.berkeley.edu | understand is called a 'thing'?" #include | -- "Bones" McCoy Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: coats@cardinal.ncsc.org (Carlie Coats) Subject: SGI parallel Fortran -- Help! Reply-To: coats@mcnc.org Organization: North Carolina Supercomputing Center I have a reduction operation I'm trying to get to run in parallel under SGI's Power Fortran (which offers loop-level data parallelism). I've tried several different (IMHO reasonable) variations, but the compiler won't generate parallel output for them. Can someone help me? The code looks like:
      ...
      PARAMETER( NCOLS=35, NROWS=32, NLEVS=6, DTMAX=900.0 )
      REAL    DT, DTT( NLEVS, NCOLS, NROWS )
      INTEGER R, C, L
      ...                       ! compute array DTT.  Then:
      DT = DTMAX
      DO 33 R = 1, NROWS
      DO 22 C = 1, NCOLS
      DO 11 L = 1, NLEVS
          DT = MIN( DT, DTT( L,C,R ) )
11    CONTINUE
22    CONTINUE
33    CONTINUE
      ...
For example, the following variation looks *to* *me* as though it ought to be legitimately parallelizable. But the compiler insists on doing it as scalar code, complaining about a data dependency in DT. What gives?? Please email replies to coats@mcnc.org.
      ...
      REAL DTTT( NROWS )        ! used for the reduction.
      ...                       ! compute array DTT.  Then:
      DO 33 R = 1, NROWS        ! *should* be parallel on R
          DTTT( R ) = DTMAX
          DO 22 C = 1, NCOLS
          DO 11 L = 1, NLEVS
              DTTT( R ) = MIN( DTTT( R ), DTT( L,C,R ) )
11        CONTINUE
22        CONTINUE
33    CONTINUE
      DT = DTTT( 1 )            ! now reduce over R:
      DO 44 R = 2, NROWS        ! this loop is scalar
          DT = MIN( DT, DTTT( R ) )
44    CONTINUE
      ...
Thanks in advance. Carlie J. Coats, Jr., Ph.D. phone (919)248-9241 MCNC Environmental Programs fax (919)248-9245 3021 Cornwallis Rd. coats@mcnc.org RTP, NC 27709-2889 xcc@epavax.rtpnc.epa.gov "My opinions are my own, and I've got *lots* of them!"
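The second variant above already has the standard shape for a parallel minimum reduction: each outer iteration owns a private partial result, and a short serial pass combines the partials at the end. Whether a particular Power Fortran release recognizes that shape is a question for SGI's compiler documentation, but the dependence structure is easy to see when the pattern is spelled out with explicit threads. The sketch below does that in C with POSIX threads; the array sizes mirror the post, while the min_worker name and the one-thread-per-row decomposition are invented for the illustration.

#include <pthread.h>
#include <stdio.h>

#define NROWS 32
#define NCOLS 35
#define NLEVS 6
#define DTMAX 900.0f

static float dtt[NROWS][NCOLS][NLEVS];  /* would be filled in elsewhere */
static float partial[NROWS];            /* one private minimum per row */

static void *min_worker(void *arg)      /* reduces a single row */
{
    int r = *(int *)arg;
    float m = DTMAX;
    for (int c = 0; c < NCOLS; c++)
        for (int l = 0; l < NLEVS; l++)
            if (dtt[r][c][l] < m) m = dtt[r][c][l];
    partial[r] = m;                     /* no two workers write the same slot */
    return NULL;
}

int main(void)
{
    pthread_t tid[NROWS];
    int rows[NROWS];
    for (int r = 0; r < NROWS; r++) {   /* parallel part: rows are independent */
        rows[r] = r;
        pthread_create(&tid[r], NULL, min_worker, &rows[r]);
    }
    for (int r = 0; r < NROWS; r++)
        pthread_join(tid[r], NULL);
    float dt = partial[0];              /* short serial combine over NROWS values */
    for (int r = 1; r < NROWS; r++)
        if (partial[r] < dt) dt = partial[r];
    printf("DT = %g\n", dt);
    return 0;
}

The point of the shape is that the only remaining serialization is the NROWS-element combine at the end; everything that touches the large array runs with no cross-iteration dependence.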
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hains@iro.umontreal.ca (Gaetan Hains) Subject: professeur a Montreal Sender: news@iro.umontreal.ca Organization: Universite de Montreal, Canada Date: Sun, 31 Oct 1993 15:26:05 GMT Apparently-To: uunet!comp-parallel My department is looking for candidates for a faculty position in computer science. Candidates specializing in parallelism are asked to contact me as soon as possible. Gaetan Hains Departement d'informatique et de recherche operationnelle Universite de Montreal, C.P. 6128 succursale A, Montreal, Quebec H3C 3J7 Tel. +1 514 343-5747 | Fax. +1 514 343-5834 | hains@iro.umontreal.ca ------ Departement d'informatique et de recherche operationnelle Please post Universite de Montreal Computer Science and Operations Research The Departement d'informatique et de recherche operationnelle (DIRO) invites applications for one faculty position in computer science and one faculty position in operations research. On the computer science side, the areas of expertise sought are software engineering or parallelism. On the operations research side, the preferred area of expertise is the study of the stochastic aspects of operations research, for example: stochastic models, simulation and stochastic optimization. The DIRO offers B.Sc., M.Sc. and Ph.D. programmes in computer science and operations research as well as a specialized bidisciplinary B.Sc. in mathematics and computer science. The DIRO has 40 professors, 9 of whom are in operations research. The Universite de Montreal is the largest French-language university in the Americas. The DIRO has specialized research laboratories supported by an integrated network of 4 departmental servers, 6 laboratory servers, 90 Unix workstations (DEC-Alpha, SPARC, Silicon Graphics) and microcomputers (IBM PC, Macintosh). The university computing services give access to a network of 60 X terminals and 33 IRIS-Indigo workstations for undergraduate teaching, as well as to 6 Silicon Graphics super-minis for intensive computation. All of these networks have Internet access. Duties: teaching at all three levels; research; supervision of graduate students. Requirements: a doctorate in computer science, operations research or a related field. Salary: according to qualifications and the collective agreement. Starting date: June 1, 1994. Interested candidates should send their curriculum vitae, the names of three referees and at most three reprints, no later than November 30, 1993, to: Guy Lapalme Directeur Departement d'informatique et de recherche operationnelle Universite de Montreal C.P. 6128, Succ. A Montreal (Quebec) H3C 3J7 Telephone: (514) 343-7090 Fax: (514) 343-5834 e-mail: lapalme@iro.umontreal.ca In accordance with Canadian immigration requirements, this position is offered in the first instance to Canadian citizens and permanent residents. -- Gaetan Hains Departement d'informatique et de recherche operationnelle Universite de Montreal, C.P. 6128 succursale A, Montreal, Quebec H3C 3J7 Tel. +1 514 343-5747 | Fax. +1 514 343-5834 | hains@iro.umontreal.ca Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: Voting on Newsgroups. I say again, I cannot be the acceptor of your votes. There are rules for forming new newsgroups which require a different method (So I can't stuff the ballot box). Contact the author directly. =========================== MODERATOR ============================== Steve Stevenson fpst@hubcap.clemson.edu Administrative address steve@hubcap.clemson.edu Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell Wanted: Sterbetz, P. Floating Point Computation Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: eugene@nas.nasa.gov (Eugene N. Miya) Subject: Biblio update Trick or treat. Treat. I updated the biblio (in the current temporary location) before going on vacation. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ohnielse@fysik.dth.dk (Ole Holm Nielsen) Subject: Re: The Future of Parallel Computing Reply-To: ohnielse@fysik.dth.dk Organization: Physics Department, Techn. Univ. of Denmark References: I often compare the use of computers for scientific purposes to the use of experimental equipment (vacuum chambers, electron guns, what have you). Any good experimentalist knows his equipment well, and knows how to push it to produce the results that he is seeking. The same with computers: A computational scientist has to know his computer sufficiently well to make it produce results (nearly) as efficiently as possible. The scientist will have to push his methods (i.e., codes) whenever he acquires a new hot box. This becomes especially necessary when new breakthroughs appear, such as the parallel computers of today, or the vector processors a decade ago (remember... ?). I believe there is no excuse for ignoring the hardware you use for scientific computing. The scientist must always push ahead the frontiers, and one way is to make the best use of the equipment put at his disposal. A corollary is that "black-box" usage of codes or compilers in scientific computing will often be poor use of resources. I think this is entirely different from commercial software, where black-box usage and portability are important features. Ole H. Nielsen Department of Physics, Building 307 Technical University of Denmark, DK-2800 Lyngby, Denmark E-mail: Ole.H.Nielsen@fysik.dth.dk Telephone: (+45) 45 93 12 22 - 3187 Telefax: (+45) 45 93 23 99 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 1 Nov 1993 09:31:20 GMT From: I Flockhart Subject: Info on out of core parallel processing? Sender: UseNet News Admin Organization: Edinburgh Parallel Computing Centre I'm currently looking into the parallelisation of global and/or non-regular (3d) image processing operations, where the images concerned do not fit into distributed core memory. A typical example might be the Fourier transform.
I'm interested in collecting references for any literature that may help with this topic, and also to hear from anyone who has worked on similar problems in the past. If you can assist with either a reference or the benefit of experience, please email me directly at: ianf@epcc.ed.ac.uk I'll collate any references I get and post them back to comp.parallel at a later date. Thanks Ian ---------------------------------------------------------------------- | e|p Edinburgh Parallel Computing Centre | | c|c University of Edinburgh | ---------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: Need Replacement Moderator for Comp.Parallel I've been the moderator of this newsgroup since 1987. I think it's about time for me to move on. My professional duties are catching up with me. Therefore, I'd like someone to volunteer to be moderator. The job is not hard---it takes (on most days) much less than an hour a day. Mondays are the worst and that takes about 15-30 minutes to get everything posted. Moderating is very important. It keeps the tone professional and it supports a thirty-five member mailing list. We have readers who do not have net access all over Asia, Europe, and the Americas---the moderating sets the tone for services and discussions. If you're SERIOUSLY interested, contact me. If no one offers, the group will become unmoderated at the end of the year. It's been fun and informative. Thanks for your patience. Steve =========================== MODERATOR ============================== Steve Stevenson fpst@hubcap.clemson.edu Administrative address steve@hubcap.clemson.edu Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell Wanted: Sterbetz, P. Floating Point Computation Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: Clemson Having Catastrophic Hardware Problems. No news for a while Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm From: c0031010@ws.rz.tu-bs.de (Josef Schuele) Subject: PARMACS-PVM Message-ID: Sender: postnntp@ibr.cs.tu-bs.de (Mr. Nntp Inews Entry) Organization: TU Braunschweig, Rechenzentrum (ANW) , Germany. Date: Mon, 1 Nov 1993 15:18:51 GMT Hi, does anybody have any numbers comparing PVM and PARMACS? Thanks, Josef Schuele Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: 3 Nov 1993 16:12:59 GMT From: angelo@carie.mcs.mu.edu (Angelo Gountis) Subject: AI and Parallel Machines Organization: Marquette University - Dept. Math, Statistics, & Comp. Sci. Hello All, I am looking for references regarding the impact parallel processing has had on projects involving AI. I realize this is rather vague but I have not been able to narrow it down much from the information I have found as of now. I want to approach this from the angle of what parallel processing has allowed AI to achieve that would not be feasible/possible without it. Thanks for any help. angelo gounti@studsys.mscs.mu.edu NeXT Mail: angelo@carie.mcs.mu.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: edgar@math.ohio-state.edu (Gerald Edgar) Newsgroups: sci.math,comp.theory,comp.parallel Subject: Re: Journal Article Acceptance timings Date: 1 Nov 1993 08:12:16 -0500 Organization: The Ohio State University, Dept.
of Math. References: <1993Oct27.184815.29227@hubcap.clemson.edu> The Notices of the Amer Math Soc publishes once a year a survey of the backlogs of the mathematics journals. The most recent one is in the April, 1993, issue. The median observed waiting time ranged from 4 months to 33 months. -- Gerald A. Edgar Internet: edgar@math.ohio-state.edu Department of Mathematics Bitnet: EDGAR@OHSTPY The Ohio State University telephone: 614-292-0395 (Office) Columbus, OH 43210 -292-4975 (Math. Dept.) -292-1479 (Dept. Fax) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Manu Thapar Subject: final CFP: Shared Memory Architectures Workshop CALL FOR PARTICIPATION International Workshop on Support for Large-Scale Shared Memory Architectures to be held in conjunction with 8th International Parallel Processing Symposium April 26-29, 1994 Hotel Regina Cancun, Mexico Sponsored by IEEE Technical Committee on Computer Architecture IEEE Technical Committee on Parallel Processing SYMPOSIUM: The eighth annual International Parallel Processing Symposium (IPPS '94) will be held April 26-29, 1994 at the Hotel Regina, Cancun, Mexico. The symposium is sponsored by the IEEE Computer Society's Technical Committee on Parallel Processing and will be held in cooperation with ACM SIGARCH. IPPS '94 is a forum for engineers and scientists from around the world to present the latest research findings in all aspects of parallel processing. WORKSHOP: The first annual workshop on Scalable Shared Memory Systems is sponsored by the Technical Committee on Computer Architecture and the Technical Committee on Parallel Processing, which were established by the IEEE Computer Society to advance knowledge in all areas of computer architecture, parallel processing, and related technologies. The workshop will be held on the first day of the Symposium (April 26). Shared memory multiprocessors are quickly becoming an important form of computer systems. The single address space provided by these architectures provide the user with an easier programming model. Many commercial systems have been announced, or are under development. Though shared memory systems provide a simple programming paradigm, the architectural design of a scalable system has interesting alternatives, some of which increase the hardware complexity. Techniques that can improve the performance of the applications via software, are also very interesting. This workshop provides a forum for researchers and commercial developers to meet and discuss the various hardware and software issues involved in the design and use of scalable shared memory multiprocessors. Software methods, such as virtual shared memory and compiler support, are of special interest to this workshop. Papers are invited in the areas such as (but not limited to) the following: - Architectures for scalable shared memory systems - Virtual shared memory - Hardware techniques for scalable shared memory - Operating system support - Compiler optimization and support - Application optimization and language support - Performance evaluation via traces, analytical modeling or emulation - Memory organization - Interconnect topology - Modeling techniques To submit an original research paper, send four copies of your complete manuscript (not to exceed 15 single-spaced pages of text using point size 12 type on 8 1/2 X 11 inch pages) to the first co-chair. References, figures, tables, etc. may be included in addition to the fifteen pages of text. 
Please include your postal address, e-mail address, and telephone and fax numbers. All manuscripts will be reviewed. Manuscripts must be received by November 12, 1993. Notification of review decisions will be mailed by December 31, 1993. Camera-ready papers are due January 31, 1994. Proceedings will be available at the Symposium. Workshop Co-Chairs: Manu Thapar Kai Li Hewlett Packard Research Labs Department of Computer Science 1501 Page Mill Road Princeton University Palo Alto, CA 94304 Princeton, NJ 08544 Internet: thapar@hplabs.hp.com Internet: li@cs.princeton.edu phone: 415-857-6284 phone: 609-258-4637 fax: 415-857-8526 fax: 609-258-1771 James R. Goodman Computer Sciences Department University of Wisconsin-Madison 1210 West Dayton Street Madison, WI 53706 Internet: goodman@cs.wisc.edu phone: 608-262-0765 Don't Miss IPPS '94 In Cancun, Mexico! The Yucatan peninsula with a shoreline of over 1600 kilometers is one of Mexico's most exotic areas. Over a thousand years ago the peninsula was the center of the great Mayan civilization. Cancun with it's powder fine sand and turquoise water is a scenic haven for sun lovers and archaeological buffs alike, and our Mexican hosts are eager to extend every hospitality for our visit to their part of the world. Air travel to Cancun is available from most major U.S. cities, and U.S. and Canadian citizens do not require passports to visit Mexico. The Hotel Regina is a self-contained meeting facility with spacious, air-conditioned rooms, on-site restaurants, and all the services of a world class hotel. Cancun is a dazzling resort with golf, tennis, and every water sport under the sun, and the area offers exciting nightlife, fabulous shopping, and historic Mayan ruins. Join us in Cancun where once again researchers, system developers, and users from around the world will convene to present leading developments in parallel processing and related applications. As in the last 2 years, the first day of the symposium will feature workshops and tutorials. In addition to technical sessions, the remaining 3 days will include the previously well-received parallel systems fair as well as commercial participation. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Suresh Damodaran Kamal Subject:Parallel Programming Paradigms:Question I am looking for the "definitions" of parallel paradigms. For example, consider master-slave. Can the slaves talk to each other? Or does the paradigm imply only that the master can "kill" slaves? Any pointers to literature will be helpful. -Suresh -- ======================================================================== Suresh Damodaran-Kamal Holler: 318-231-5839(O) P.O.Box 42322 318-269-9787(H) Lafayette, LA 70504 Email:skd@cacs.usl.edu -Laughter reduces entropy of mind -:))))) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: prakash@cis.ohio-state.edu (ravi prakash) Subject: code size vs. data size Date: 1 Nov 1993 19:32:06 -0500 Organization: The Ohio State University Dept. of Computer and Info. Science Could anybody provide me with references to works determining the ratio of code size to data size for a wide variety of applications? For example, I have come across simulations of certain atmospheric effects where the code size(in megabytes) is about 200 times the size of the data (megabytes) it works on. 
On the other hand, some SIMD implementations have data size comparable to or greater than the size of the code. If you could tell me the ratio of code size and data size for some implementations in the Grand Challenge Problems' domain too, along with the language(s) and machine(s) used for the implementation, it would be a great help. Thanks, Ravi Prakash ---------------------------------------------- Department of Computer and Information Science The Ohio State University, Columbus, OH 43210. ---------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kerry@totara.cs.waikato.ac.nz (Kerry Guise) Subject: barrier synchronisation Date: 2 Nov 1993 04:33:29 GMT Organization: The University of Waikato Nntp-Posting-Host: totara.cs.waikato.ac.nz Keywords: barrier ksr I'm wondering if anyone can help me with the barrier synchronisation functions which are extensions of the KSR implementation of the pthreads library. I'm trying to port a program written for the KSR to Solaris 2.x and I've run up against this barrier :-). How easy would it be to implement my own barrier synchronisation mechanism using standard tools such as mutexes, cv's etc.? Can anyone suggest some papers I could read on the subject? Thanks in advance, Kerry Guise
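One common way to build such a barrier from nothing more than a mutex and a condition variable is sketched below. This is a generic illustration rather than the KSR extension being ported, and the type and function names are invented; the same structure should carry over to Solaris 2.x native threads with mutex_t and cond_t in place of the pthread types.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  all_here;
    int nthreads;        /* number of participating threads */
    int waiting;         /* how many have arrived in this round */
    unsigned generation; /* round counter; lets the barrier be reused */
} barrier_t;

void barrier_init(barrier_t *b, int nthreads)
{
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->all_here, NULL);
    b->nthreads = nthreads;
    b->waiting = 0;
    b->generation = 0;
}

void barrier_wait(barrier_t *b)
{
    pthread_mutex_lock(&b->lock);
    unsigned gen = b->generation;
    if (++b->waiting == b->nthreads) {      /* last arrival releases everyone */
        b->waiting = 0;
        b->generation++;
        pthread_cond_broadcast(&b->all_here);
    } else {
        while (gen == b->generation)        /* also guards against spurious wakeups */
            pthread_cond_wait(&b->all_here, &b->lock);
    }
    pthread_mutex_unlock(&b->lock);
}

The generation counter is what makes the barrier reusable across iterations: a waiter only leaves when the round it joined has completed, so a fast thread re-entering the barrier cannot be confused with stragglers from the previous round.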
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Ken Thomas Subject: Announcement---Applications for High Performance Computers Date: 2 Nov 1993 12:10:34 -0000 Organization: Electronics and Computer Science, University of Southampton Applications for High Performance Computers Soultz, France Date: Dec 14th-17th, 1993 The aim of this course is to understand some aspects of current applications of high performance computers. There are three main objectives: 1. To give an overview of parallel hardware and software and to explore the role of performance critical parameters. Matrix kernels are also explored. 2. To give awareness of the tools that are likely to be important in the future. This includes HPF (High Performance Fortran) and the message passing standards. 3. To put together applications in diverse areas of science and engineering. There are speakers on seismic modelling, CFD, Structural Analysis, Molecular dynamics and climate modelling. Programme (provisional). Day 1 14.00 Start Introduction and Welcome Session 1 Overview Introduction to Parallel Hardware Introduction to Parallel Software Panel Discussion Day 2 Start 09.30 Session 2 Performance Characterization Low-level Benchmarks and Performance Critical Parameters CFD Session 3 Applications I Seismic Modelling Climate Modelling Panel Discussion Day 3 Start 9.30 Session 4 HPC Standards HPF Message-Passing Interface Session 5 Parallel Linear Algebra Structural Analysis Panel Discussion Day 4 Start 09.00 Session 6 The Parkbench Initiative Grand Challenge Applications Panel Discussion. Close 12.15 Cost 375 pounds sterling (Full Rate), 275 pounds sterling for academic participants and members of ACT; costs include lunch and refreshments throughout the day. Minimum numbers 10 This course cannot be given unless there is a minimum of 10 participants. It will be necessary to receive your registration no later than Monday 6th December, 1993. Should the course not run, then all registration fees will be returned. Applications for High Performance Computers Soultz, France Date: Dec 14th-17th, 1993 Applications for High Performance Computing Registration Form Title . . . . . . . . . . . . . . . . . Surname . . . . . . . . . . . . . . . . First Name . . . . . . . . . . . . . . . Institution . . . . . . . . . . . . . . . . . . . . . . . . Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tel: . . . . . . . . . . . . . . . . . Fax: . . . . . . . . . . . . . . . . . I enclose a cheque in the sum of . . . . . . . . . . . . . . . . . . made payable to "University of Southampton". Please forward cheque and registration to Telmat Informatique. Venue: Telmat Informatique Z.1. - 6 Rue de l'industrie, B P 12 68360 Soultz Cedex France Local Accommodation Arrangements contact: Rene Pathenay/Francoise Scheirrer Telmat Informatique Tel: 33 89 765110 Fax: 33 89 742734 Email: pathenay@telmat.fr
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Tue, 02 Nov 1993 14:46:18 -0600 (MDT) From: slhpv@cc.usu.edu Subject: Compiler producing CDG and Def-Use chains Message-ID: <1993Nov2.144539.3037@cc.usu.edu> Organization: Utah State University I am doing some research on parallelizing sequential code. For this work I need both the Control-Dependency Graph and Definition-Use chains for the code. Are there any compilers available which produce this information for a subset of C, Fortran, or Pascal? Any pointers appreciated. David Dunn
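For readers who have not met the second artifact requested above, a definition-use chain simply links each definition of a variable to the uses that definition reaches before the variable is redefined. The toy below shows the idea for straight-line three-address code only; a real tool derives the chains from reaching-definitions dataflow over the control-flow graph (the same graph that underlies the control-dependence graph), and the instruction encoding and all names here are invented for the illustration.

#include <stdio.h>

#define NVARS 26                              /* toy variables 'a'..'z' */

struct insn { char def; char use1, use2; };   /* def = use1 op use2 */

static void print_def_use(const struct insn *code, int n)
{
    int last_def[NVARS];                      /* index of the live definition */
    for (int v = 0; v < NVARS; v++) last_def[v] = -1;
    for (int i = 0; i < n; i++) {
        char uses[2] = { code[i].use1, code[i].use2 };
        for (int u = 0; u < 2; u++)
            if (uses[u] && last_def[uses[u] - 'a'] >= 0)
                printf("def of %c at %d is used at %d\n",
                       uses[u], last_def[uses[u] - 'a'], i);
        if (code[i].def)                      /* this instruction starts a new chain */
            last_def[code[i].def - 'a'] = i;
    }
}

int main(void)
{
    /* 0: a = b + c   1: d = a + a   2: a = d + b   3: e = a + d */
    struct insn code[] = { {'a','b','c'}, {'d','a','a'}, {'a','d','b'}, {'e','a','d'} };
    print_def_use(code, sizeof code / sizeof code[0]);
    return 0;
}

With branches and loops a single last-definition slot is no longer enough, which is exactly why the question above asks for a compiler that exports this information rather than recomputing it by hand.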
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stgprao@st.unocal.COM (Richard Ottolini) Subject: Re: CS 6400 Message-ID: <1993Nov1.160439.1200@unocal.com> Sender: news@unocal.com (Unocal USENET News) Organization: Unocal Corporation References: <1993Oct27.172532.20787@hubcap.clemson.edu> Date: Mon, 1 Nov 1993 16:04:39 GMT Even though this system offers 16GB memory, does it have greater than 32 bits software addressing? Many (not all) of the "preview" talks we've had are disappointing in this regard. The default seems to be a distributed memory model of 32 bits. We could use 42 bits immediately to describe our larger data files and have been suggesting 48 bit global addressing. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sims@gehrig.ucr.edu (david sims) Subject: Seeking refs on detecting race conditions at run time Organization: University of California, Riverside (College of Engineering/Computer Science) Nntp-Posting-Host: gehrig.ucr.edu hi all, I'm looking for references for systems that detect race conditions at run time. I believe these systems are a debugging aid. The parallel programmer writes his program, and if a race condition occurs during testing, these systems will detect it. I know there's a large body of research in this area. I just don't know where to get started. Thanks for any help/bibliographies/references. -- David L. Sims Department of Computer Science sims@cs.ucr.edu University of California +1 (909) 787-6437 Riverside CA 92521-0304 PGP encryption key available on request.
USA Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm,comp.theory,comp.org.ieee,info.theorynt From: das@ponder.csci.unt.edu (Sajal Das) Subject: Call for Papers Sender: usenet@mercury.unt.edu (UNT USENet Adminstrator) Organization: University of North Texas, Denton Date: Mon, 1 Nov 1993 18:50:09 GMT Apparently-To: hypercube@hubcap.clemson.edu ******************* * CALL FOR PAPERS * ******************* JOURNAL OF COMPUTER & SOFTWARE ENGINEERING -------------------------------------------- SPECIAL ISSUE on PARALLEL ALGORITHMS & ARCHITECTURES (Tentative Publication Date: January 1995) Due to fundamental physical limitations on processing speeds of sequential computers, the future-generation high performance computing environment will eventually rely entirely on exploiting the inherent parallelism in problems and implementing their solutions on realistic parallel machines. Just as the processing speeds of chips are approaching their physical limits, the need for faster computations is increasing at an even faster rate. For example, ten years ago there was virtually no general-purpose parallel computer available commercially. Now there are several machines, some of which have received wide acceptance due to reasonable cost and attractive performance. The purpose of this special issue is to focus on the desgin and analysis of efficient parallel algorithms and their performance on different parallel architectures. We expect to have a good blend of theory and practice. In addition to theoretical papers on parallel algorithms, case studies and experience reports on applications of these algorithms in real-life problems are especially welcome. Example topics include, but are not limited to, the following: Parallel Algorithms and Applications. Machine Models and Architectures. Communication, Synchronization and Scheduling. Mapping Algorithms on Architectures. Performance Evaluation of Multiprocessor Systems. Parallel Data Structures. Parallel Programming and Software Tools. *********************************************************************** Please submit SEVEN copies of your manuscript to either of the * Guest Editors by May 1, 1994: * * *********************************************************************** Professor Sajal K. Das || Professor Pradip K. Srimani * Department of Computer Science || Department of Computer Science * University of North Texas || Colorado State University * Denton, TX 76203 || Ft. Collins, CO 80523 * Tel: (817) 565-4256, -2799 (fax) || Tel: (303) 491-7097, -6639 (fax) * Email: das@cs.unt.edu || Email: srimani@CS.ColoState.Edu * *********************************************************************** INSTRUCTIONS FOR SUBMITTING PAPERS: Papers should be 20--30 double spaced pages including figures, tables and references. Papers should not have been previously published, nor currently submitted elsewhere for publication. Papers should include a title page containing title, authors' names and affiliations, postal and e-mail addresses, telephone numbers and Fax numbers. Papers should include a 300-word abstract. If you are willing to referee papers for this special issue, please send a note with research interest to either of the guest editors. 
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dowd@mangrove.eng.buffalo.edu (Patrick Dowd) Subject: CFP - SIGCOMM'94 Keywords: CFP Reply-To: dowd@eng.buffalo.edu (Patrick Dowd) Organization: State University of New York at Buffalo Posting-Software: pnews (version 1.4) Date: Mon, 1 Nov 1993 20:30:22 GMT Apparently-To: comp-parallel@cis.ohio-state.edu Call for Papers ACM SIGCOMM'94 CONFERENCE Communications Architectures, Protocols and Applications University College London London, UK August 31 to September 2, 1994 (Tutorials and Workshop, August 30) An international forum on communication network applications and technologies, architectures, protocols, and algorithms. Authors are invited to submit full papers concerned with both theory and practice. The areas of interest include, but are not limited to: -- Analysis and design of computer network architectures and algorithms, -- Innovative results in local area networks, -- Mixed-media networks, -- High-speed networks, routing and addressing, support for mobile hosts, -- Resource sharing in distributed systems, -- Network management, -- Distributed operating systems and databases, -- Protocol specification, verification, and analysis. A single-track, highly selective conference where successful submissions typically report results firmly substantiated by experiment, implementation, simulation, or mathematical analysis. General Chair: Jon Crowcroft, University College London Program Chairs: Stephen Pink, Swedish Institute of Computer Science Craig Partridge, BBN Publicity Chair: Patrick Dowd, State University of New York at Buffalo Local Arrangements Chair: Soren-Aksel Sorensen, University College London Papers must be less than 20 double-spaced pages long, have an abstract of 100-150 words, and be original material that has not been previously published or be currently under review with another conference or journal. In addition to its high quality technical program, SIGCOMM '94 will offer tutorials by noted instructors such as Paul Green and Van Jacobson (tentative), and a workshop on distributed systems led by Derek McAuley. Important Dates: Paper submissions: 1 February 1994 Tutorial proposals: 1 March 1994 Notification of acceptance: 2 May 1994 Camera ready papers due: 9 June 1994 All submitted papers will be judged based on their quality and relevance through double-blind reviewing where the identities of the authors are withheld from the reviewers. Authors names should not appear on the paper. A cover letter is required that identifies the paper title and lists the name, affiliation, telephone number, email, and fax number of all authors. Authors of accepted papers need to sign an ACM copyright release form. The Proceedings will be published as a special issue of ACM SIGCOMM Computer Communication Review. The program committee will also select a few papers for possible publication in the IEEE/ACM Transactions on Networking. Submissions from North America should be sent to: Craig Partridge BBN 10 Moulton St Cambridge MA 02138 All other submissions should be sent to: Stephen Pink Swedish Institute of Computer Science Box 1263 S-164 28 Kista Sweden Five copies are required for paper submissions. Electronic submissions (uuencoded, compressed postscript) should be sent to each program chair. Authors should also e-mail the title, author names and abstract of their paper to each program chair and identify any special equipment that will be required during its presentation. 
Due to the high number of anticipated submissions, authors are encouraged to strictly adhere to the submission date. Contact Patrick Dowd at dowd@eng.buffalo.edu or +1 716 645-2406 for more information about the conference. Student Paper Award: Papers submitted by students will enter a student-paper award contest. Among the accepted papers, a maximum of four outstanding papers will be awarded full conference registration and a travel grant of $500 US dollars. To be eligible the student must be the sole author, or the first author and primary contributor. A cover letter must identify the paper as a candidate for this competition. Mail and E-mail Addresses: General Chair Jon Crowcroft Department of Computer Science University College London London WC1E 6BT United Kingdom Phone: +44 71 380 7296 Fax: +44 71 387 1397 E-Mail: J.Crowcroft@cs.ucl.ac.uk Program Chairs Stephen Pink (Program Chair) Swedish Institute of Computer Science Box 1263 S-164 28 Kista Sweden Phone: +46 8 752 1559 Fax: +46 8 751 7230 E-mail: steve@sics.se Craig Partridge (Program Co-Chair for North America) BBN 10 Moulton St Cambridge MA 02138 Phone: +1 415 326 4541 E-mail: craig@bbn.com Publicity Chair Patrick Dowd Department of Electrical and Computer Engineering State University of New York at Buffalo 201 Bell Hall Buffalo, NY 14260-2050 Phone: +1 716 645 2406 Fax: +1 716 645 3656 E-mail: dowd@eng.buffalo.edu Local Arrangements Chair Soren-Aksel Sorensen Department of Computer Science University College London London WC1E 6BT United Kingdom
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Bodo.Parady@Eng.Sun.COM (Bodo Parady - SMCC Systems Performance) Subject: The PAR93 SMP Benchmark Charter: PAR93 -- is a benchmark suite designed to measure cache based RISC SMP system performance using well behaved codes that parallelize automatically. While no one benchmark can fully characterize performance, it is hoped the results of a variety of realistic benchmarks can give valuable insight into expected real performance. PAR93 and its use in the performance analysis of SMPs will be discussed at: Supercomputing 93 Room A-107, Portland Convention Center 4:30-6:30 Tuesday, November 16, 1993 1.800.go2.sc93 (1.800.462.7293) Papers will be delivered by Mike Humphrey of SGI, Vinod Grover of Sun, Bodo Parady of Sun, and Bruce Leasure of Kuck and Associates. This will be followed by a panel discussion which, in addition to the presenters, includes Forrest Baskett of SGI, and John Hennessey of Stanford. Focus -- This suite is for users of small to medium (<64 cpus) symmetric multiprocessing systems who wish to compare the performance of various systems, but who do not wish to change source code to attain parallel speedups.
As such, the benchmarks allow for no tuning of the code beyond the base, and no addition of compiler directives. It is assumed that code tuning through source code changes and the addition of compiler directives can attain greater speedups, but this is not the scope of this benchmark. Neither are distributed memory, vector architectures, nor massively parallel systems within the goals of this benchmark, although nothing in the suite prevents any system architecture from being used; other architectures are encouraged to run this benchmark. Vector and MP vector architectures may find that this suite offers an excellent method to evaluate the performance of their systems and are heartily encouraged to run this suite. This suite is intended to characterize the subset of applications for which compilers can parallelize automatically (compilation without the use of directives). As such this suite offers examples of parallelizable code and an indication of the speedups attainable from compilers on some applications. This benchmark offers users, system designers and compiler writers a method to compare their products on a completely level playing field, with no parallel programming paradigm favored. PAR93 is based on codes from SPEC CFP92, Perfect Club, NASA Ames, ARCO, We gratefully acknowledge the pioneering work by David Bailey, George Cybenko, Dave Schneider, and Wolgang Gentzsch, among others, and solicit their guidance in continued evolution of the suite. Q: What equipment do I need to run PAR93? A: Here is a minimum configuration: 320 MB Main Memory 500 MB Swap disk 1 processor An OS A Fortran compiler Here is a suggested configuration: 500 MB Main Memory 1 GB swap disk 2 processors An OS capable of serving multiple OS's (eg. Unix) An MP Fortran compiler Q: Why is it named PAR93 and not SPECpar93? A: It has not been approved by SPEC. PAR93 has been submitted to SPEC for further development. Before release from SPEC it will undergo a rigorous approval process to ensure it is truly portable, and provides a level playing field for comparison of certain types of systems. It could become SPECpar94. Q: Will PAR93 indicate the performance of SPECpar94? A: Not necessarily. During the SPEC development process, benchmarks may be added, dropped, or modified. Q: I have strong opinions on what benchmarks ought to be added (dropped, or modified). How can I influence the process? A: Get involved. Bring your ideas, your data and analysis, your time and energy to SPEC. Consider joining SPEC. Write to the following address for more information: SPEC [Systems Performance Evaluation Corporation] c/o NCGA [National Computer Graphics Association] 2722 Merrilee Drive Suite 200 Fairfax, VA 22031 USA Phone: +1-703-698-9600 Ext. 318 FAX: +1-703-560-2752 E-Mail: spec-ncga@cup.portal.com For technical questions regarding the SPEC benchmarks (e.g., problems with execution of the benchmarks), Dianne Dean (she is the person normally handling SPEC matters at NCGA) refers the caller to an expert at a SPEC member company. Q: What are the run rules: A: The run rules are identical to the SPEC run rules, that is to say: . No code changes, except for portability . No compiler directives . No benchmark specific optimizations All but the last are easy to enforce and understand. Q: Why are compiler directives not allowed, since with my parallel system, the addition of one directive would improve performance of typical user codes by a great deal, and it only takes a couple of minutes to put in the directives, and everyone does it. 
A: All of this may be true, but PAR93 has been constructed with the goal of measuring parallel system performance, where a system includes the CPU, cache, memory system, OS, and compiler. The goal is measurement of system performance using codes that parallelize automatically (a short illustrative sketch of this distinction appears at the end of this FAQ). Many codes that parallelize well have been excluded from PAR93 because, in order to parallelize well, they require the intervention of the programmer. Due to the availability of many codes that present-day compilers can parallelize, the run rules were restricted to those that compilers can deal with. This eliminates human factors in the comparison of various systems. Most users will find that the codes provided demonstrate excellent speedups on their SMP systems. From this they may be able to infer a little about what can be gained by tuning. Barring compiler directives levels the playing field, since no directive format is favored and no types of directives are favored. It also spares performance engineers the career agony of constantly tuning benchmarks. There have been other benchmarks that try to deal with user tuning, like Perfect, and others that try to measure unrestricted CPU performance, like the TPP Linpack, but PAR93 is a benchmark that measures the compiler and OS in addition to the effects of the hardware. Q: Is automatic parallelization mature enough to be the basis of a benchmark suite? A: Automatic parallelization has been commercially available since 1986 on Sequent, and possibly earlier on Alliant machines (their first machine shipped with automatic parallelization), and on multi-processor Cray machines (in the late 1980s). Now there are lots of commercial automatic parallelizers, with David Kuck and his group at UIUC being one of the leading academic organizations researching automatic parallelization. Q: What application areas are addressed in PAR93? A: The following areas are addressed: 101.tomcatv 2D mesh generation. This program is the author's modernized version (1991) compared to the SPECfp92 version (1987) of the original tomcatv. It is also scaled up in size. 102.swim The original SPEC shallow water equations program by Swartztrauber and Sato has been scaled up to arrays of 513x513 from 256x256. 103.su2cor The same quantum physics program using Monte-Carlo methods from Prof. Bunk that is found in SPECfp92. 104.hydro2d The same SPECfp92 program that solves the astrophysical Navier-Stokes equations. The problem is scaled up from 102x102 to 127x127. 105.tfft David Bailey's NASA FFT set. The FFT length is 4M elements. 106.fdmod Siamak Hassanzadeh's 2D finite difference scheme for 3D seismic wave propagation. This is part of the original ARCO Benchmark Suite. It was selected because this portion of seismic data processing consumes over 80% of the computer cycles. 107.mgrid Eric Barszcz and Paul Fredrickson's multigrid solver from NASA Ames. 109.appbt Sisira Weeratunga's block ADI solver for nonlinear PDEs from NASA Ames, using block tridiagonal solvers. 111.appsp Sisira Weeratunga's block ADI solver for nonlinear PDEs from NASA Ames, based on the solution of pentadiagonal systems of equations. 113.ora An update of the original SPEC 048.ora to prevent constant propagation by reading input. The problem size has been updated. ora is a ray tracing program. 117.nump Portions of Peter Montgomery's number theory program that parallelize. Q: I want to use profiling tools and beautiful GUIs to assist in parallelization.
Why does this suite not encourage their use and allow me to demonstrate them? A: On the contrary, this suite will get the attention of parallel computer vendors, and every GUI and tool will be applied to get better compilers out. Some users like GUIs and tools, but a greater majority prefer compilers. Furthermore, there could be future evolutions of PAR93 that address the more advanced issues of parallelization. Q: What is the reporting methodology? A: The principle is to report both raw performance and scaling of MP systems. At the first level, one configuration must be reported; for example, the run times for 8 CPUs applied to the PAR93 suite could be reported. At the second level, all of the performance for 1, 2, 4, and 8 CPUs (or any sequence of CPUs, with up to 5 configurations in a single report) can be reported. Scaling with the number of processors is displayed, but as with the present SPEC benchmark, the single figure of merit is the ratio to a reference system, in this case a Sun SPARCstation 10 Model 41 with a 40 MHz SuperSPARC and 1 MB external cache. The baseline uniprocessor values for the suite are:
Benchmark     Seconds
101.tomcatv   1171.5
102.swim      1220.6
103.su2cor     161.5
104.hydro2d    990.5
105.tfft      2628.4
106.fdmod      929.2
107.mgrid      630.6
109.appbt     2424.4
111.appsp     2476.2
113.ora        832.0
117.nump      1005.1
This takes less than 4 hours to run on a single-CPU 40.33 MHz SS10/41. Q: Some of the benchmarks are not full applications. Why not substitute full applications for these abstractions? A: Some, like SWIM, are abstractions from full programs, but issues of availability, code size, and test cases have prevented SPEC from using full applications in all cases. These same limitations apply to the PAR93 suite. As this suite matures, and SPEC members have their input and are able to contribute more to this suite, more full applications are expected. Q: Some codes, like tomcatv, represent 2D problems when in reality 3D mesh generation and 3D fluid problems are more realistic. Why not use more modern programs? A: There is an effort to port a 3D mesh generator, but this will take time given its huge size. When complete, this could add to the quality of the suite. Consumers of PAR93 realize that this is the first generation, and that future generations aim to attain much higher levels of quality. Q: What about the duplication of similar numerical techniques, such as swim and fdmod both using explicit finite differences? A: Even though swim and fdmod have similar methods, they represent different application areas, and the methods and program setup are quite different. Compare this, for example, to SPECfp92, where mdljsp2 and mdljdp2 are simply the same program, but one is single precision and the other is double precision. The comparison of results from the two similar programs can provide insights not otherwise available. Q: Why not report two numbers for this suite? One obtained by autoparallelization and the other obtained by hand parallelization or the addition of directives? A: This is similar to the reporting rules of PERFECT and some of the PARKBENCH suite. The purpose here has been stated above. Reporting results in this manner could be a possible rule for a future suite. Q: What about adding codes that have subroutine- and loop-level parallelism, that encourage the use of directives, and that have various granularities? A: This is a good suggestion for a future suite.
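To make the distinction drawn in this FAQ a little more concrete, here is a small illustrative sketch in C. It is my own, not part of the PAR93 sources, and all names in it are invented for illustration. The first loop has fully independent iterations, the kind of code an automatic parallelizer can distribute across CPUs without any directives; the second carries a dependence from one iteration to the next, so a compiler would normally leave it serial unless the programmer restructured it or added a vendor directive, which the PAR93 run rules forbid.

#include <stdio.h>

#define N 1000

/* Independent iterations: a[i] depends only on b[i] and c[i], so an
   automatic parallelizer can split the index range across processors. */
static void independent_loop(double *a, const double *b, const double *c, int n)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = b[i] * c[i] + 1.0;
}

/* Loop-carried dependence: each a[i] needs a[i-1] from the previous
   iteration, so the loop cannot be parallelized as written; it would need
   restructuring or a directive, neither of which PAR93 permits. */
static void dependent_loop(double *a, int n)
{
    int i;
    for (i = 1; i < n; i++)
        a[i] = 0.5 * a[i - 1] + a[i];
}

int main(void)
{
    double a[N], b[N], c[N];
    int i;

    for (i = 0; i < N; i++) {
        a[i] = 1.0;
        b[i] = (double) i;
        c[i] = 2.0;
    }
    independent_loop(a, b, c, N);
    dependent_loop(a, N);
    printf("a[%d] = %f\n", N - 1, a[N - 1]);
    return 0;
}

Reduction loops (for example, summing an array) sit between these two cases: they carry a dependence, but many automatic parallelizers of the period could already recognize and parallelize them.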
Please direct responses to: Bodo Parady Michael Humphrey mikehu@sgi.com 415.390.1936 Bodo Parady | (415) 336-0388 SMCC, Sun Microsystems | Bodo.Parady@eng.sun.com Mail Stop MTV15-404 | Domain: bodo@cumbria.eng.sun.com 2550 Garcia Ave. | Alt: na.parady@na-net.ornl.gov Mountain View, CA 94043-1100 | FAX: (415) 336-4636 ----- End Included Message ----- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kerry@totara.cs.waikato.ac.nz (Kerry Guise) Subject: barrier synchronisation My first posting appears to have disappeared into the hypernet so try again ... I'm looking for some information, perhaps published papers, on the implementation of the barrier synchronisation mechanism in KSR's extensions to the pthreads library ( I think others may have implemented barrier synchronisation as well ; is this now part of the POSIX 1003.4a draft ?). Any pointers appreciated. With thanks, Kerry Guise Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 3 Nov 93 11:30:58 CST From: mwitten@chpc.utexas.edu Subject: URGENT: DEADLINE CHANGE FOR WORLD CONGRESS UPDATE ON DEADLINES FIRST WORLD CONGRESS ON COMPUTATIONAL MEDICINE, PUBLIC HEALTH, AND BIOTECHNOLOGY 24-28 April 1994 Hyatt Regency Hotel Austin, Texas ----- (Feel Free To Cross Post This Announcement) ---- Due to a confusion in the electronic distribution of the congress announcement and deadlines, as well as incorrect deadlines appearing in a number of society newsletters and journals, we are extending the abstract submission deadline for this congress to 31 December 1993. We apologize to those who were confused over the differing deadline announcements and hope that this change will allow everyone to participate. For congress details: To contact the congress organizers for any reason use any of the following pathways: ELECTRONIC MAIL - compmed94@chpc.utexas.edu FAX (USA) - (512) 471-2445 PHONE (USA) - (512) 471-2472 GOPHER: log into the University of Texas System-CHPC select the Computational Medicine and Allied Health menu choice ANONYMOUS FTP: ftp.chpc.utexas.edu cd /pub/compmed94 (all documents and forms are stored here) POSTAL: Compmed 1994 University of Texas System CHPC Balcones Research Center 10100 Burnet Road, 1.154CMS Austin, Texas 78758-4497 SUBMISSION PROCEDURES: Authors must submit 5 copies of a single-page 50-100 word abstract clearly discussing the topic of their presentation. In addition, authors must clearly state their choice of poster, contributed paper, tutorial, exhibit, focused workshop or birds of a feather group along with a discussion of their presentation. Abstracts will be published as part of the preliminary conference material. To notify the congress organizing committee that you would like to participate and to be put on the congress mailing list, please fill out and return the form that follows this announcement. You may use any of the contact methods above. If you wish to organize a contributed paper session, tutorial session, focused workshop, or birds of a feather group, please contact the conference director at mwitten@chpc.utexas.edu . The abstract may be submitted electronically to compmed94@chpc.utexas.edu or by mail or fax. There is no official format. If you need further details, please contact me. 
Matthew Witten Congress Chair mwitten@chpc.utexas.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edwin sha Subject: CFP: Great Lakes Symposium on VLSI Sender: USENET News System Organization: University of Notre Dame, Notre Dame *************** CALL FOR PAPERS *************** FOURTH GREAT LAKES SYMPOSIUM ON VLSI (March 4-5, 1994, Notre Dame, Indiana, U.S.A.) SPONSORED/SUPPORTED BY: IEEE Circuits & Systems Society IEEE Computer Society ACM SIGDA Department of Computer Science and Engineering, Univ. of Notre Dame We invite original, unpublished, state-of-the-art research manuscripts in all aspects of VLSI systems. The topics may include, but are not limited to: 1. Place and Route 2. Testability/BIST 3. Synthesis/Verification 4. Field Programmable Gate Arrays (FPGAs) 5. Highly Parallel Arch. 6. Multichip Modules (MCMs) 7. Application-Specific Design 8. Analog & Mixed-Signal IC's 9. Graph Theory Application to VLSI 10. Handcrafted Chips Submitted manuscripts should clearly state their contribution to the state of the art. Both theoretical and experimental research results will be considered. A published proceedings will be available at the symposium. A special registration fee for students is planned. Student researchers are welcome to submit their work for review as regular papers. Also, a special student session is planned at this year's Symposium. Manuscripts for this special student session will not appear in the Symposium proceedings. However, these articles will be reviewed for topic suitability only. TECHNICAL PROGRAM: The technical program consists of regular and short presentations of papers and student poster sessions for two days. The symposium will start with an invited talk about the design experiences of the PowerPC project. SUBMISSIONS MUST INCLUDE: 1. Title of the paper. 2. Category 1-10 from the above. 3. Four copies of an extended abstract (about 1500 words). 4. A 50-word abstract. 5. A cover page/letter stating all authors' names, addresses, office and FAX numbers, e-mail addresses and affiliations. One author must be designated as a contact for all future correspondence. Author names and affiliations must appear on this cover page ONLY. IMPORTANT DEADLINES: Submissions Due: November 10, 1993. Acceptance Notification: December 15, 1993. Camera-ready Paper Due: January 15, 1994. SEND PAPERS TO: Edwin Sha GLS-VLSI 94 Dept. of Computer Science and Engineering University of Notre Dame Notre Dame, IN 46556 For further information contact: Phone: (219)-631-8803. Fax: (219)-631-8007. Email: glsvlsi@cse.nd.edu TECHNICAL PROGRAM COMMITTEE: Co-Chairs: Edwin H.-M. Sha, University of Notre Dame Naveed Sherwani, Western Michigan University Members: Jacob Abraham University of Texas-Austin Paul Ainslie Delco Electronics Jason Cong UCLA Randall Geiger Iowa State University Dwight Hill Synopsys S. Y. Kung Princeton University Kevyn Salsburg IBM D.F. Wong University of Texas-Austin ORGANIZING COMMITTEE Chair: John Uhran, Jr., CSE, Notre Dame Publicity: Steve Bass, CSE, Notre Dame INDUSTRIAL LIAISONS Jeff Banker, ERIM Gerald Michael, Indiana Microelectronics Kenneth Wan, AMD, Inc. ***************** ***************** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: chiangm@cps.msu.edu (Chiang Chi Ming) Subject: Ask for Help: locate communication benchmark programs Date: 3 Nov 1993 20:32:36 GMT Organization: Dept.
of Computer Science, Michigan State University I am looking for benchmark programs that can measure the performance of a network, or any papers that mention this kind of program. Any information will be helpful. Thanks in advance for your help. --Chi-ming Chiang chiangm@cps.msu.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 3 Nov 93 15:38:01 PST From: ahicken@parasoft.com (Arthur Hicken) Subject: Hands-on Parallel/Distributed Programming Courses offered. ParaSoft Corporation, the leader in distributed and parallel computing tools, will conduct a hands-on, introductory course on the theory and practice of distributed and parallel computing. The unique hands-on focus of this course (75% of the total time) ensures that participants will gain a practical understanding of distributed computing applications. Each participant will program on a workstation linked to a network within the lab, to demonstrate and verify theoretical concepts presented in the seminar. Course Goals: Upon completion of the course, the participant will be able to: 1. Set up a simple job dispatcher with dynamic load balancing. 2. Build an application which runs on multiple platforms. 3. Implement process communication for tightly coupled applications. Course Content: 1. Theory - Introduction to parallel/distributed computing, programming models, programming environments. 2. Labs - Machine setup, running parallel/distributed programs, basic parallel/distributed I/O, message passing, global operations, data decomposition, heterogeneous computing. Prerequisites: 1. Working knowledge of C or Fortran. 2. Familiarity with Unix. 3. Strong desire to learn about distributed computing. Dates: Thursday, July 8 - Friday, July 9 Location: ParaSoft Offices - Pasadena, CA Instructors: Dr. Adam Kolawa - Worldwide expert and lecturer on distributed computing Arthur Hicken - Unix expert and invited participant at ACM and IEEE forums Lab Setup: Each participant will develop distributed applications at a workstation on a network within the lab. Cost: $495 - includes a complete set of tutorial materials and Express manuals. Lunches and the evening receptions are included. Cost can be credited toward purchase of Express, or toward application development services. Educational Discount: Only $200 for university personnel and students. Participation: Strictly limited to 15 people. Please call or send email to ParaSoft early to reserve your space. Applications are accepted on a first-come, first-served basis. We will be glad to help you arrange travel and hotel accommodations. Additional courses are available to graduates of the Level I course: Level II - 3 days. Covers parallel/distributed debugging, graphics, performance monitoring, parallelization techniques, asynchronous programming, basic parallel/distributed application skeletons, etc. Level III - 3 days. Covers application of the topics learned in the Level I and II courses to real applications. Special Evening Receptions - Get acquainted and discuss practical applications (drinks and hors-d'oeuvres provided). A copy of the transparencies used in the course can be obtained from the ParaSoft anonymous ftp server at ftp.parasoft.com (192.55.86.17) in the /express/classes directory. For more information contact: ParaSoft Corporation 2500 E. Foothill Blvd.
Pasadena, CA 91107-3464 voice: (818) 792-9941 fax: (818) 792-0819 email: info@parasoft.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 3 Nov 93 16:20:55 PST From: ahicken@parasoft.com (Arthur Hicken) Subject: Hands-on Parallel/Distributed Programming Courses offered. Oops, sorry, the dates in the previous post were wrong, as well as the place. Here's the correct info. -Arthur "Grabbed the wrong file" Hicken This course has been timed to take place after the end of the Cluster Workshop at Florida State, so you can plan to attend both if you'd like. Dates: Friday, December 10 - Sunday, December 12 Location: Florida State University, Tallahassee, Florida Instructors: Dr. Adam Kolawa - Worldwide expert and lecturer on distributed computing Participation: Strictly limited to 15 people. Please call or send email to ParaSoft early to reserve your space. Applications are accepted on a first-come, first-served basis. A copy of the transparencies used in the course can be obtained from the ParaSoft anonymous ftp server at ftp.parasoft.com (192.55.86.17) in the /express/classes directory. For more information contact: ParaSoft Corporation 2500 E. Foothill Blvd. Pasadena, CA 91107-3464 voice: (818) 792-9941 fax: (818) 792-0819 email: info@parasoft.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From: jet@nas.nasa.gov (J. Eric Townsend) Subject: mailing list info on TMC CM-5, Intel iPSC/860, Intel Paragon Organization: NAS/NASA-Ames Research Center J. Eric Townsend (jet@nas.nasa.gov) last updated: 3 Nov 1993 (corrected admin/managers list info) This file is posted to USENET automatically on the 1st and 15th of each month. It is mailed to the respective lists to remind users how to unsubscribe and set options. INTRODUCTION ------------ Several mailing lists exist at NAS for the discussion of using and administrating Thinking Machines CM-5 and Intel iPSC/860 parallel supercomputers. These mailing lists are open to all persons interested in the systems. The lists are:
LIST-NAME        DESCRIPTION
cm5-managers     -- discussion of administrating the TMC CM-5
cm5-users        -- discussion of using the TMC CM-5
ipsc-managers    -- discussion of administrating the Intel iPSC/860
ipsc-users       -- discussion of using the Intel iPSC/860
paragon-managers -- discussion of administrating the Intel Paragon
paragon-users    -- discussion of using the Intel Paragon
The ipsc-* lists at Cornell are going away; the lists here will replace them. (ISUG members will be receiving information on this in the near future.) The cm5-users list is intended to complement the lbolt list at MSC. SUBSCRIBING/UNSUBSCRIBING ------------------------- All of the above lists are run with the listserv package. In the examples below, substitute the name of the list from the above table for the text "LIST-NAME". To subscribe to any of the lists, send email to listserv@boxer.nas.nasa.gov with a *BODY* of subscribe LIST-NAME your_full_name Please note: - you are subscribed with the address that you sent the email from. You cannot subscribe an address other than your own. This is considered a security feature, but I haven't gotten around to taking it out. - your subscription will be handled by software, so any other text you send will be ignored. Unsubscribing: It is important to understand that you can only unsubscribe from the address you subscribed from. If that is impossible, please contact jet@nas.nasa.gov to be unsubscribed by hand.
ONLY DO THIS IF FOLLOWING THE INSTRUCTIONS DOES NOT PRODUCE THE DESIRED RESULTS! I have better things to do than manually do things that can be automated. To unsubscribe from any of the mailing lists, send email to listserv@boxer.nas.nasa.gov with a body of unsubscribe LIST-NAME OPTIONS ------- If you wish to receive a list in digest form, send a message to listserv@boxer.nas.nasa.gov with a body of set LIST-NAME mail digest OBTAINING ARCHIVES ------------------ There are currently no publicly available archives. As time goes on, archives of the lists will be made available. Watch this space. -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311 (personal email goes to: jet@well.sf.ca.us) CM-5 Administrator, Parallel Systems Support, NASA Ames Numerical Aerodynamic Simulation PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: andreas@nlp.physik.th-darmstadt.de (Andreas Billo) Subject: [Q] Problem with parallel threads with 3L-C v 2.1.1 Organization: Institut fuer Angewandte Physik, TH-Darmstadt, Germany Reply-To: andreas@nlp.physik.th-darmstadt.de We have the following problem: the program starts four parallel threads. The thread RecS blocks the other threads whenever its for-loop contains a lot of program code, even though that code is never executed. If we insert a break condition in the for-loop, the other threads are executed after the for-loop finishes. Thanks for any ideas and hints. andreas

main()
{
    StartAdr2 = malloc( 300000 );
    thread_start( FMain1, StartAdr2,       10000, ... );
    thread_start( FMain2, StartAdr2+10000, 10000, ... );
    thread_start( FMain3, StartAdr2+20000, 10000, ... );
}

void FMain1( StartAdr )
char *StartAdr;
{
    ...
    thread_start( .. );
    thread_start( .. );
    thread_start( RecS, StartAdr2+50000, 10000, ... );
}

void FMain2( StartAdr )
char *StartAdr;
{
    ...
    par_print("...");
}

void FMain3( StartAdr )
char *StartAdr;
{
    ...
    par_print("...");
}

void RecS( ... )
{
    ...
    for (;;) {
        if (1 == 2) { /* code, never executed */ };
    };
}

Name: Andreas Billo Organization: Institut fuer Angewandte Physik, Nichtlineare Physik, TH Darmstadt Address: Schlossgartenstr. 7, 64289 Darmstadt, Germany Phone: +49 - +6151 - 164086 Fax: +49 - +6151 - 164534 Internet: andreas@nlp.physik.th-darmstadt.de IBM: Iesus Babbage Mundi, Iesum Binarium Magnificamur. AMDG: Ad Maiorem Dei Gloriam? Von wegen Ars Magna, Digitale Gaudium! IHS: Iesus Hardware & Software! Casaubon Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: Need Replacement Moderator for Comp.Parallel I've been the moderator of this newsgroup since 1987. I think it's about time for me to move on. My professional duties are catching up with me. Therefore, I'd like someone to volunteer to be moderator. The job is not hard---it takes (on most days) much less than an hour a day. Mondays are the worst and that takes about 15-30 minutes to get everything posted. Moderating is very important. It keeps the tone professional and it supports a thirty-five member mailing list. We have readers who do not have net access all over Asia, Europe, and the Americas---the moderating sets the tone for services and discussions.
If you're SERIOUSLY interested, contact me. If no one offers, the group will become unmoderated at the end of the year. It's been fun and informative. Thanks for your patience. Steve =========================== MODERATOR ============================== Steve Stevenson fpst@hubcap.clemson.edu Administrative address steve@hubcap.clemson.edu Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell Wanted: Sterbetz, P. Floating Point Computation Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: clark@bruce.nist.gov Subject: NIST SBIR request for proposal Sender: news@dove.nist.gov Organization: NIST Date: Wed, 3 Nov 1993 13:59:34 GMT Apparently-To: comp-parallel@uunet.uu.net The National Institute of Standards and Technology has issued a request for proposals to be funded by its Small Business Innovation Research (SBIR) program. Readers of this newsgroup may be particularly interested in a request for proposals in the Physics topical area: 8.7.5 Schroedinger Equation Algorithms for MIMD Architectures NIST programs in laser-atom interaction utilize models that require the numerical solution of a many-particle, time-dependent Schroedinger equation. We are interested in algorithms for the solution of this equation that can take advantage of computational parallelism, particularly multiple-instruction, multiple-data (MIMD) architectures. We require a set of computational modules that can solve the initial-value problem for the Schroedinger equation on a multidimensional spatial grid. Such modules should be written in the Fortran or C languages, and use PVM for interprocessor communication so that they can be executed on a network of heterogeneous computers. They should provide standard interfaces for visualization, e.g. calls to AVS, PV-WAVE, or Display PostScript. Preference will be given to proposals that optimize fast Fourier transform techniques for MIMD architectures, or that also provide for the solution of large eigenvalue problems. Further information on the SBIR program may be obtained from Mr. Norman Taylor A343 Physics Building National Institute of Standards and Technology Gaithersburg, MD 20899 (301)975-4517 The deadline for receipt of proposals is December 1, 1993. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: torsten@news.dcs.warwick.ac.uk (Torsten Scheel) Subject: Voronoi algorithm Message-ID: <1993Nov3.174014.18261@dcs.warwick.ac.uk> Originator: torsten@slate Sender: news@dcs.warwick.ac.uk (Network News) Nntp-Posting-Host: slate Organization: Department of Computer Science, Warwick University, England Date: Wed, 3 Nov 1993 17:40:14 GMT Hi! I'm looking for SIMD algorithms for computing the Voronoi diagram of a point set in Euclidean space. I am trying to implement one such algorithm using the language Parallaxis. Does anybody have experience with this problem? Thanks, Torsten (Please excuse my English; it is surely faulty.) P.S.: Please mail me directly. Thank you! Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: Andre.Seznec@irisa.fr (Seznec Andre) Newsgroups: comp.arch,comp.parallel Subject: Reference pointer Date: 4 Nov 1993 10:57:34 GMT Organization: Irisa, Rennes (FR) I need a pointer to the following paper: "A flexible interleaved memory design for generalized low conflict memory access", by L.S.
Kaplan Thanks Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dka@dcs.ed.ac.uk Date: Thu, 4 Nov 93 11:42:45 GMT Subject: CFP: ACM/IEEE/SCS 8th Workshop on Parallel and Distributed Simulation ============================================= Call For Papers ACM/IEEE/SCS 8th Workshop on Parallel and Distributed Simulation University of Edinburgh, Scotland, U.K. July 6-8 1994, ============================================= Sponsors: ACM Special Interest Group on Simulation (SIGSIM), IEEE Computer Society Technical Committee on Simulation (IEEE-TCSIM), and Society for Computer Simulation (SCS) Topics: PADS provides a forum for presenting recent results in the simulation of large and complex systems by exploiting concurrency. The scope of the conference includes, but is not limited to: * Algorithms, and methods for concurrent simulation (e.g. optimistic, conservative, discrete, continuous, event-driven, oblivious) * Programming paradigms for concurrent simulation (e.g. object-oriented, logic, functional) * Models of concurrent simulation (e.g. stochastic, process algebraic, temporal logic) * Performance evaluation (both theoretical and experimental) of concurrent simulation systems * Special purpose concurrent simulation (e.g. multiprocessor architectures, distributed systems, telecommunication networks, VLSI circuits, cache simulations) * Relationship of concurrent simulation and underlying architecture (e.g. SIMD and MIMD machines, geographically distributed computers, tightly-coupled multiprocessors) Schedule: Deadline for Paper submission : December 1, 1993 Notification of acceptance : March 1, 1994 Camera ready copy due by : April 15, 1994. Invited Speaker : LEONARD KLEINROCK (Los Angeles, USA) General Chair : Rajive Bagrodia (Los Angeles, USA) Local Arrangements: Monika Lekuse (Edinburgh, U.K.) Program Co-chairs D. K. Arvind Jason Yi-Bing Lin Department of Computer Science, Bellcore, University of Edinburgh, MRE 2D-297 Mayfield Road, 445 South Street Edinburgh EH9 3JZ, U.K. Morristown, NJ 07962, USA. dka@dcs.ed.ac.uk liny@thumper.bellcore.com Voice: +44 31 650 5176 Voice: +1 (201) 829-5095 Fax: +44 31 667 7209 Fax: +1 (201) 829-5886 Program Committee I. Akyildiz (Atlanta, USA) A. Greenberg (Bell Laboratory, USA) R. Ayani (Kista, Sweden) P. Heidelberger (IBM, USA) F. Baiardi (Pisa, Italy) C. Lengauer (Passau, Germany) M. Bailey* (Tucson, USA) D. Nicol* (Williamsburg, USA) S. Balsamo (Pisa, Italy) T. Ott (Bellcore, USA) H. Bauer (Munich, Germany) B. Preiss (Waterloo, Canada) R. Fujimoto* (Atlanta, USA) S. Turner (Exeter, UK) * Member of the Steering Committee\\ Send e-mail to D. K. Arvind (dka@dcs.ed.ac.uk) for inclusion in the PADS electronic mailing list. Submissions: Prospective authors should submit six copies of the paper written in English and not exceeding 5000 words to either one of the Program Co-chairs. Papers must be original and not submitted for publication elsewhere. Each submission should include the following in a cover sheet: short abstract, contact person for correspondence, postal and e-mail addresses. To ensure blind reviewing, authors' names and affiliations should appear only on the cover sheet. Bibliographic references should be modified so as not to compromise the authors' identity. Papers submitted by electronic mail will not be considered. 
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: shollen@kiwi.cs.wright.edu (Sheila Hollenbaugh) Subject: Help finding vendors for parallel systems Date: 4 Nov 1993 12:54:53 GMT Organization: Wright State University, Dayton,OH Some of our faculty have expressed an interest in obtaining a parallel system such as an nCUBE or MasPar. If anyone could provide me with information about vendors of such beasts I would be most grateful. We are in the Dayton, Ohio area, but manufacturers' addresses and phone numbers anywhere in the US would be great. --Sheila -------------------------------- Sheila Hollenbaugh Sr. Computer Systems Engineer Wright State University College of Engineering & Computer Science Dayton, OH 45435 Voice: (513) 873-5077 FAX: (513) 873-5009 shollen@cs.wright.edu or shollen@valhalla.cs.wright.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.arch,comp.parallel From: suja@cse.uta.edu (Elizabeth Suja) Subject: Benchmarks Organization: Computer Science Engineering at the University of Texas at Arlington I was looking for some benchmark programs like gcc, expresso, spice, etc to test a cache simulator I am working on. I would be grateful if anyone could tell me where to obtain these from. Please respond to: suja@cse.uts.edu Thanks in advance.
Elizabeth Suja Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: kamratha@rosebud.sdsc.edu (Anke Kamrath) Newsgroups: comp.sys.super,comp.unix.cray,comp.parallel Subject: Cray User Group Meeting - S'94, First Announcement Organization: San Diego Supercomputer Center ****************************************************** ****************************************************** ****************************************************** *** FIRST ANNOUNCEMENT *** *** *** *** CRAY USER GROUP *** *** Spring '94 Conference *** *** *** *** March 14-18, 1994 *** *** San Diego, California *** *** *** *** LOCAL ARRANGEMENTS *** ****************************************************** ****************************************************** ****************************************************** Dear Colleague, The San Diego Supercomputer Center (with assistance from The Scripps Research Institute, Cray Research, Inc., and Cray Superserver Research, Inc.) is pleased to invite you to attend the Cray User Group Conference in San Diego, California, March 14-18, 1994. The theme of this year's conference is Unparalleled Horizons. To help you with arrangements, this brochure contains information about: Conference registration How to submit abstracts for conference papers and posters San Diego area Conference hotel and how to make reservations Spring is a beautiful time of year in San Diego. Whether you like the ocean, the mountains, or the desert, all are nearby. So join us in San Diego, and expect a rewarding experience personally as well as professionally. You will receive the final local arrangements brochure with details of the technical programs in January. If you have any questions, please contact us. We'll see you in March. Sincerely, Anke Kamrath Local Arrangements Chairperson San Diego CUG San Diego Supercomputer Center LOCATION: Princess Hotel Convention Center, San Diego, California IMPORTANT DATES Conference: March 14-18, 1994 Early Conference Registration Deadline: January 15, 1994 Hotel Registration Deadline: February 10, 1994 Papers and Posters Deadline: December 10, 1993 Late Conference Registration Deadline: February 18, 1994 Last Day to Cancel: February 18, 1994 SPONSORED BY THE SAN DIEGO SUPERCOMPUTER CENTER A National Laboratory for Computational Science and Engineering To the members of CUG: It is my pleasure to cordially invite you to attend the 33rd Cray User Group meeting in San Diego, California, March 14-18, 1994. This year's theme, "Unparalleled Horizons", will challenge us all as we look ahead to unequaled progress in high-performance computing technology. Calls for Papers and Posters for the Conference are included at the back of this brochure. The deadline for abstracts and poster descriptions is December 10, 1993. Papers are presented in 30-minute time slots as a part of a topical formal session. The poster session is an opportunity to present your work in an informal, unstructured setting using a visual representation (a poster) rather than a paper. There will be an award for the best poster presentation. The Program Committee is planning a stimulating agenda. In addition to the successful Monday morning tutorials, the first day of the Conference will include a "What is CUG" discussion and an Advisory Council meeting. Tuesday morning, Astronaut Sally Ride will be the Keynote Speaker. General sessions and special interest tracks will be held each day. The Conference concludes Friday at noon.
Birds-Of-a-Feather (BOF) sessions will be scheduled to bring together those people interested in discussing specific issues. If you wish to organize a BOF at the Conference, please contact me. A room for BOFs will be available. We welcome you and hope to see you in San Diego. Jean Shuler Program Chairperson CUG Vice President Lawrence Livermore National Laboratory ** CONFERENCE INFORMATION ** LOCATION: San Diego Princess Hotel Convention Center 1404 West Vacation Road San Diego, CA 92109 Conference Office For general arrangements, conference registration, payment, and technical questions: Until March 11, 1994 San Diego Supercomputer Center CUG San Diego P.O.Box 85608 San Diego, CA 92186-9784 Phone: 1-619-534-8333 Fax: 1-619-534-5152 TDD: 1-619-534-5001 E-mail: sdcug@sdsc.edu From March 13-18 San Diego Princess Hotel Convention Center Phone: 1-619-274-7141 Fax: 1-619-581-5908 PROGRAM, PAPERS, AND POSTERS A preliminary conference program will be included in the January mailing, and the technical program will be finalized shortly before the conference. To submit a paper or poster, return the Call for Papers and Call for Posters forms at the end of this brochure as directed. All questions regarding the program should be directed to the Program Chairperson. REGISTRATION Register as soon as possible to help the Local Arrangements Committee plan a well-organized conference. All conference registrations postmarked on or before January 15, 1994 are eligible for the reduced registration fee. We encourage all attendees to take advantage of this reduced fee. All registrations, with payment, must be postmarked on or before February 18th, 1994, using airmail, first-class postage. Confirmation will be sent after payment is received. CANCELLATION Cancellation requests received by the CUG office on or before February 18, 1994 will be refunded in full. Cancellations received after February 18 will not be refunded. REGISTRATION DESK Pick up your registration materials and a conference badge at the Registration Desk in the Princess Convention Center in the Princess Ballroom foyer. You must wear the conference badge to participate in CUG sessions and events. CONFERENCE OFFICE HOURS Sunday, March 13 4:00 pm to 7:00 pm Monday, March 14 8:00 am to 7:00 pm Tuesday, March 15 8:00 am to 6:00 pm Wednesday, March 16 8:00 am to 5:30 pm Thursday, March 17 8:00 am to 5:00 pm Friday, March 18 8:00 am to 12:30 pm ** FACILITIES ** MESSAGES The conference office at the Princess Hotel will be staffed throughout the conference to assist with special requirements. Incoming telephone calls may be directed to the office at 1-619-274-7141. Messages will be posted near the registration area. E-MAIL AND PERSONAL COMPUTERS VT100 compatible terminals will be available to allow participants to send and receive e-mail and use the Internet. A Macintosh with Persuasion, MS Word, MS Excel, and MacDraw II and a printer will be available. COPYING A copy machine will be available for making limited copies. If you have handouts or documentation to distribute at the conference, please bring a sufficient number of copies with you. DINING SERVICES Refreshments will be available during breaks throughout the conference. Breakfast will be served Tuesday through Friday, and lunch Tuesday through Thursday. Food and drinks will be served at each of the social events. Special dietary requirements should be specified on the conference registration form. There are restaurants and bars located on the hotel grounds.
If you are interested in sampling some of San Diego's excellent Mexican and Californian cuisine, ask the Concierge for recommendations. SPECIAL NEEDS The Princess Hotel is in compliance with the Americans with Disabilities Act (ADA). If you have any special needs, please indicate them on the conference registration form. ** HOTEL INFORMATION ** The conference hotel is the San Diego Princess Hotel and Conference Center, located in the center of San Diego on a forty-four acre island in Mission Bay. The accommodations consist of bungalows nestled among beautiful lagoons and lush tropical landscaping. A block of rooms at a special conference rate has been reserved (including a limited number at a government rate). The conference room rates are only available for reservations made before February 10, 1994. We recommend that you stay at the Princess, as there are no other hotels within walking distance. If the room block is filled, the hotel will attempt to locate you at another nearby hotel. Room rates at other hotels cannot be guaranteed. Given the likelihood that attendance at the conference will be large, we recommend you reserve your room as soon as possible. For your convenience, use the Hotel Registration form at the end of this brochure. ** TRANSPORTATION ** San Diego is a large city, and public transportation is not very practical for getting around the area. If you plan to explore San Diego or sightsee during your stay, we recommend you rent a car. The San Diego International Airport (Lindbergh Field) is within a 10-minute drive of the Princess Hotel. SUPERSHUTTLE OR TAXI If you do not rent a car at the airport, we recommend you use SuperShuttle to get to the Princess Hotel from the airport. The Shuttle cost is $6.00 each way. To arrange for the shuttle: Find the hotel courtesy phoneboard located in the baggage claim area. Press #69 and the operator will direct you to the nearby SuperShuttle loading area. Or call 278-5700 from any public telephone. When returning to the airport, make advance reservations for SuperShuttle. Ask the Concierge or front desk to book your return reservation, or call 278-8877 to make a reservation yourself. Taxis are available immediately outside the baggage claim area at the airport. The taxi fare to the Princess is about $10.00. DIRECTIONS FROM AIRPORT If you are driving from the airport, take Harbor Drive South (towards downtown San Diego) to Grape Street. At the light turn left. Follow the signs to Interstate 5 North. Take Interstate 5 North to Sea World Drive. At the top of the offramp at the light, turn left and go over the bridge onto Sea World Drive. Follow Sea World Drive to West Mission Bay Drive/Ingraham Street. Veer to the right. Stay on Ingraham until you reach West Vacation Road, where you will see the sign for the San Diego Princess Hotel. Turn left into the hotel grounds. FROM LOS ANGELES If you are driving from the Los Angeles area, take Interstate 5 South to Sea World Drive. At the top of the offramp at the light turn right. Follow Sea World Drive to West Mission Bay Drive/Ingraham Street. Veer to the right. Stay on Ingraham until you reach West Vacation Road, where you will see the sign for the San Diego Princess Hotel. Turn left into the hotel grounds. TRAVEL TO MEXICO Persons with U.S. citizenship may freely visit nearby Mexico (17 miles from downtown) with only normal identification. If you are not a U.S. citizen, you'll need to carry your passport and have a multiple-entry visa for the U.S. to visit Mexico. If you leave the U.S.
with a single entry visa, you will not be able to return to the U.S. from Mexico; this is considered a second entry. ** SOCIAL EVENTS ** CRI RECEPTION All participants and guests are invited to a Monday evening reception sponsored by CRI. Newcomer's Reception All new CUG member sites and first-time CUG attendees are invited to a reception Tuesday evening on the Governor's Lawn. NIGHT-OUT The traditional CUG Night Out on Wednesday (6:00 until 10:15) is a cruise on San Diego Bay with dinner, music, and spectacular views of San Diego's downtown skyline and Coronado Bridge. The Night Out is included with registration, but additional tickets for guests must be purchased separately. Register and purchase any additional tickets as early as possible to guarantee space for guests. GUEST PROGRAMS, TOURS, AND OTHER ACTIVITIES If you plan to extend your stay and vacation in San Diego, the Princess Hotel offers many activities, including Botanical walks around the island 18-hole golf course Fitness Center Jogging and par course Bicycles and Quadracycles Tennis courts Swimming pools and whirlpools Shuffleboard Croquet course Wind surfing, sailing, and power and paddle boats Some of these activities are free, and others are provided at an additional cost. Ask the Concierge for information about their fitness programs and recreation pass. The Princess Hotel Concierge staff is also available to assist you with dinner reservations, directions, and tours. Discounted tickets for the San Diego Zoo, Wild Animal Park, and Sea World can be purchased from the Concierge Desk on the day you visit each attraction. The staff can also arrange fishing, scuba diving, water skiing, or golf at one of several championship courses located within 30 minutes of the hotel. Shopping opportunities in the area include Seaport Village on San Diego Bay, Horton Plaza in downtown San Diego, and Old Town. You will need transportation to and from these areas, so plan to rent a car or hire a taxi. CLIMATE/CLOTHING San Diego has mild temperatures all year round. You rarely need a topcoat or raincoat. Evenings can be cool, so bring a sweater or jacket. The temperatures in March can be warm, so shorts and swimwear may be desirable. Average temperatures range from 50-66 F (or 10-19 C). Most San Diego restaurants welcome casual attire. ** REGISTRATION INFORMATION ** REGISTRATION Complete the registration form (next page) and mail or fax it to CUG San Diego. Conference fees are due with your registration. All payment must be in U.S. dollars from checks drawn on U.S. banks or by electronic funds transfer. Credit cards or invoices are not accepted. PAYMENT BY CHECK Make checks payable to "CUG San Diego". Indicate your CUG site code on your check, and send it with your registration form. Be sure all currency conversions and transmission costs have been paid by your installation site. PAYMENT BY ELECTRONIC FUNDS TRANSFER You may pay conference fees by transferring the appropriate amount (increased by $8.50 to cover the transfer fee) to: CUG San Diego Bank of America La Jolla Plaza Branch # 1102 4380 La Jolla Village / 100 San Diego, CA 92122 Account # 11027 04167 Routing # 121000358 Be sure to include your name and site on the order. Send a copy of the transfer order with your registration form.
ADDRESSES Local Arrangements: Chairperson: Anke Kamrath Coordinator: Ange Mason CUG San Diego San Diego Supercomputer Center P.O.Box 85608 San Diego, CA 92186-9784 USA Phone: 1-619-534-8333 Fax: 1-619-534-5152 TDD: 1-619-534-5001 E-Mail: sdcug@sdsc.edu Program: Chairperson: Jean Shuler National Energy Research Supercomputer Center (NERSC) P.O.Box 5509 L-561 Lawrence Livermore National Laboratory Livermore, CA 94551 USA Phone: 1-510-423-1909 Fax: 1-510-422-0435 E-Mail: shuler@nersc.gov ** CONFERENCE REGISTRATION FORM ** Early Registration Deadline: January 15, 1994 Late Registration Deadline: February 18, 1994 Please type or block print separate registration forms for each Conference attendee. Mail or fax the registration with a check or a copy of a funds transfer order to the following address: CUG San Diego, Ange Mason San Diego Supercomputer Center P.O. Box 85608 San Diego, CA 92186-9784 USA Phone: 1-619-534-8333, Fax: 1-619-534-5152 TDD: 1-619-534-5001 __________________________________________________________________________ Full Name (Last, First) __________________________________________________________________________ Organization Name CUG Site Code (Mandatory) __________________________________________________________________________ Department* Mail Stop __________________________________________________________________________ Signature of Installation Delegate CUG Site Code (If not employed by member site or CUG) __________________________________________________________________________ Address __________________________________________________________________________ City State/Province Postal/Zip Code Country __________________________________________________________________________ Daytime Phone (Country,Area Code,Number) Fax (Country,Area Code,Number) __________________________________________________________________________ Electronic Mail Address __________________________________________________________________________ Guest Name __________________________________________________________________________ Special Dietary Requirement (Please Specify) __________________________________________________________________________ Other Special Needs (Please Specify) __________________________________________________________________________ Emergency Contact and Phone Number ___ Check here to have your name/address omitted from the official CUG Proceedings ___ Check here if you are a new member or first-time attendee Early Registration (before January 15, 1994) $550 Late Registration (between Jan.15-Feb.18,1994) $600 A. Registration fee: $ ______ Additional copies of Proceedings $ 30 B. Additional Proceedings cost (quantity ___): $ ______ Additional guest tickets for Night Out $100 C. Additional guest ticket cost (number: ___): $ ______ D. For electronic funds transfer fee, add $8.50: $ ______ E. Payment due, in U.S. funds only (A + B + C + D): $ ______ ** CALL FOR PAPERS *** Deadline: December 10, 1993 Please type or block print the information requested on this form. Mail or fax to the Program Chair: Jean Shuler National Energy Research Supercomputer Center P.O. 
Box 5509 L-561 Lawrence Livermore National Laboratory Livermore, CA 94551 USA Phone: 1-510-423-1909 Fax: 1-510-422-0435 E-mail: shuler@nersc.gov ____________________________________________________________________________ NAME ____________________________________________________________________________ ORGANIZATION CUG SITE CODE (MANDATORY) ____________________________________________________________________________ DEPARTMENT MAIL STOP ____________________________________________________________________________ ADDRESS ____________________________________________________________________________ CITY STATE/PROVINCE POSTAL/ZIP CODE COUNTRY ____________________________________________________________________________ Daytime Phone (Country,Area Code,Number) Fax (Country,Area Code,Number): ____________________________________________________________________________ ELECTRONIC MAIL ADDRESS ____________________________________________________________________________ TITLE OF PAPER ABSTRACT (TWO OR THREE SENTENCES) AUDIO/VISUAL REQUIREMENTS Video 1/2": VHS/ NTSC, PAL, SECAM Video 3/4" U-matic/NTSC 35 mm slide projector Overhead Projector Other (specify): ______________________________________ SESSION IN WHICH YOU WISH TO PRESENT General Sessions Operating Systems Applications Operations Networking Performance Management Software Tools Graphics Mass Storage Systems User Services ** CALL FOR POSTERS ** Deadline: December 10, 1993 Please type or block print the information requested on this form. Mail or fax to: Larry Eversole JPL/Cal Tech MS 301-455 4800 Oak Grove Drive Pasadena, CA 91109 USA Fax: 1-818-393-1187 E-mail: eversole@voyager.jpl.nasa.gov ____________________________________________________________________________ NAME ____________________________________________________________________________ ORGANIZATION CUG SITE CODE (MANDATORY) ____________________________________________________________________________ DEPARTMENT MAIL STOP ____________________________________________________________________________ ADDRESS ____________________________________________________________________________ CITY STATE/PROVINCE POSTAL/ZIP CODE COUNTRY ____________________________________________________________________________ Daytime Phone (Country,Area Code,Number) Fax (Country,Area Code,Number): ____________________________________________________________________________ ELECTRONIC MAIL ADDRESS ____________________________________________________________________________ TITLE OF POSTER _____________________________________________________________________________ SHORT DESCRIPTION ROUGH MOCK-UP OF THE POSTER ** HOTEL REGISTRATION ** CRAY USER GROUP Meeting, March 14-18,1994 Deadline: February 10, 1994 Please type or block print the information requested. 
Mail or fax it to the following address: San Diego Princess Hotel Accommodation Reservation Request 1404 West Vacation Road San Diego, California 92109-7994 USA Phone: 1-619-274-4630 TDD: 1-619-274-4630 FAX: 1-619-581-5929 (Allow 24 hours for confirmation) _____________________________________________________________________________ Last Name First Name _____________________________________________________________________________ Mailing Address _____________________________________________________________________________ City State/Province Postal/ZIP Code Country _____________________________________________________________________________ Daytime Phone (Country,Area Code,Number) Fax (Country,Area Code,Number): _____________________________________________________________________________ Date of Arrival Estimated time of arrival Date of Departure Standard room rate $120 Government room rate* $79 (Government ID required) Any additional persons $15 (Number _________ ) 1 Bed ___ 2 Beds ___ Smoking Room ___ Non-Smoking Room ____ All rates subject to tax. Current tax is 9%. * A limited number of government-rate rooms is available. If you qualify for a government-rate room please make your reservations as early as possible to ensure a room at this rate. Required: ONE NIGHT ROOM DEPOSIT INCLUDING TAX (A) Enclose a check in the amount of one night's lodging (plus 9% tax), or (B) Complete the credit card information, and your account will be charged in the amount of one night's lodging, plus 9% tax, upon receipt by the hotel. This deposit guarantees first night availability and will be credited to the last night of your stay. An early check-out will forfeit the deposit. Be sure arrival and departure dates are correct. (A) Amount of enclosed check $ _________________________ (B) Amount charged to credit card $ _________________________ VISA MASTER CARD Card number _______________________________ Cardholder Name ______________________________ _____________________________________________________________________________ SIGNATURE CARD EXPIRATION DATE o Hotel check-in time and guestroom availability is 4:00 pm. o Check-out time is 12:00 noon. For reservations made after February 10, 1994, or after the group block is filled, the hotel will extend the group rate based upon availability. If the Princess is sold out, the hotel will refer you to a nearby hotel. ___________________________________________________________________ SIGNATURE DATE Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dongarra@cs.utk.edu Subject: Templates for the Solution of Linear Systems We have just completed a book on iterative methods. The book is primarily aimed at computational scientists who are not specialists in computational linear algebra and would like to incorporate state-of-the-art computational methods for solving large sparse non-symmetric systems of linear equations. The book, titled ``Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods'', is authored by Richard Barrett, Mike Berry, Tony Chan, Jim Demmel, June Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Chuck Romine and Henk van der Vorst, is being published by SIAM, and will be available in bound form in mid-November. SIAM has set the price for the Templates book at List Price $18.00 / SIAM Member Price $14.40. We are putting the royalties from the book into a SIAM fund to help support students attending SIAM meetings.
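As a rough illustration of the kind of iterative "template" the book is about, here is a minimal sketch of the classical Jacobi iteration in C with a simple relative-change stopping test. This sketch is mine, written for this digest; it is not taken from the book (which presents its algorithms in Fortran-77 and MATLAB), and it uses a tiny dense, diagonally dominant test matrix purely for illustration, whereas the book targets large sparse systems and more powerful Krylov-type methods and preconditioners.

#include <stdio.h>
#include <math.h>

#define N 4
#define MAX_ITER 1000
#define TOL 1e-10

/* Jacobi iteration for A x = b with a dense N x N matrix.
   Returns the number of iterations on convergence, -1 otherwise. */
static int jacobi(double A[N][N], double b[N], double x[N])
{
    double x_new[N];
    int iter, i, j;

    for (iter = 0; iter < MAX_ITER; iter++) {
        double diff = 0.0, norm = 0.0;
        for (i = 0; i < N; i++) {
            double sum = b[i];
            for (j = 0; j < N; j++)
                if (j != i)
                    sum -= A[i][j] * x[j];
            x_new[i] = sum / A[i][i];
        }
        for (i = 0; i < N; i++) {
            diff += fabs(x_new[i] - x[i]);
            norm += fabs(x_new[i]);
            x[i] = x_new[i];
        }
        if (diff <= TOL * norm)   /* simplest possible stopping criterion */
            return iter + 1;
    }
    return -1;
}

int main(void)
{
    /* A small diagonally dominant test system, made up for illustration. */
    double A[N][N] = {{10, 1, 0, 1}, {1, 12, 2, 0}, {0, 2, 9, 1}, {1, 0, 1, 11}};
    double b[N] = {12, 15, 12, 13};
    double x[N] = {0, 0, 0, 0};
    int iters = jacobi(A, b, x);
    printf("iterations: %d, x = %g %g %g %g\n", iters, x[0], x[1], x[2], x[3]);
    return 0;
}

The stopping test above is the simplest possible one; choosing robust stopping criteria is one of the issues the book discusses.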
The book contains: o) Mathematical descriptions of the flow of the iterations. o) Algorithms described in Fortran-77 and MATLAB. o) Discussion of convergence and stopping criteria. o) Suggestions for extending each method to more specific matrix types (for example, banded systems). o) Suggestions for tuning (for example, which preconditioners are applicable and which are not). o) Performance: when to use a method and why. SIAM is trying an experiment with this book and has allowed the postscript file containing the book to be distributed freely over the internet. It is available from netlib. To retrieve the postscript file you can use one of the following methods: 1) anonymous ftp to netlib2.cs.utk.edu cd linalg get templates.ps quit 2) from any machine on the Internet type: rcp anon@netlib2.cs.utk.edu:linalg/templates.ps templates.ps 3) send email to netlib@ornl.gov and in the message type: send templates.ps from linalg 4) use Xnetlib and click "library", click "linalg", click "linalg/templates.ps", click "download", click "Get Files Now". (Xnetlib is an X-window interface to the netlib software based on a client-server model. The software can be found in netlib, ``send index from xnetlib''). The algorithm descriptions in Fortran and MATLAB can be found in netlib under the directory linalg. Jack Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Thomas.Stricker@cs.cmu.edu Subject: FTP site for iWarp information... Date: Thu, 4 Nov 1993 14:55:23 -0500 Organization: Carnegie Mellon, Pittsburgh, PA A while ago I posted the FTP site for the publications relating to the Carnegie Mellon/Intel SSD built iWarp parallel computers. To make the announcement short: Decommissioned old ftp site: puffin.warp.cs.cmu.edu, 128.2.206.42 New ftp site: skylark.warp.cs.cmu.edu, 128.2.222.122 User: anonymous... no password Content: several publications relating to iWarp and Fortran FX. Given that there are not too many of these iWarp machines around (1000-2000 nodes total), I thought the archive was no longer used. But oh well - less than five days after dedicating puffin.warp.cs.cmu.edu to something else, I started getting complaints. So please take note of the change. Tom PS: iWarp is a building block for parallel computers. It is similar to transputers in its concepts, except that it has always delivered 20 MFlops + 20 MIPS of computation speed and 320 MBytes/sec of communication bandwidth per node, with each node consisting of a single integrated chip plus memory. iWarps are typically 2D toruses with 16-256 nodes. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: zhang@ringer.cs.utsa.edu (Xiaodong Zhang) Subject: CS faculty positions Organization: University of Texas at San Antonio UNIVERSITY OF TEXAS AT SAN ANTONIO Faculty Positions in Computer Science Applications are invited for two tenure-track faculty positions in computer science at the assistant professor level. Candidates must have a strong background in high-performance compiler technology or in software engineering, with emphasis on parallel system evaluation and modeling. Applicants must have a PhD in computer science or a related area prior to September 1, 1994, and must demonstrate strong potential for excellence in research and teaching. Responsibilities include research, teaching, direction of graduate students and program development. Salaries for the positions are competitive.
Applicants should submit a letter of application, a resume, and the names, addresses and phone numbers of at least three references to: Professor Robert Hiromoto Chairman of Computer Science Faculty Search University of Texas at San Antonio San Antonio, TX 78249 Preliminary screening will begin on February 1, 1994. The closing date for these positions is March 1, 1994. Questions and inquiries can be made by e-mail (cs@ringer.cs.utsa.edu); however, applications and reference letters should be sent by post. UTSA is an Equal Opportunity/Affirmative Action Employer. Women and minorities are encouraged to apply. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: "Jeffrey M. Bloom" Subject: Need Slalom 1.0 C source code Organization: University of Virginia Computer Science Department If anybody has a copy of the C source code for the original implementation of the Slalom benchmark, I would really appreciate it if you could email it to me. That would be the implementation which uses a Cholesky decomposition to factor the matrix. Thanks, Jeff Bloom jmb6g@virginia Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kjcooper@gandalf.rutgers.edu (Kenneth Cooper) Newsgroups: comp.parallel Subject: Tuple space, What is it ???? Keywords: Tuple Date: 4 Nov 93 21:21:29 GMT Organization: Rutgers Univ., New Brunswick, N.J. Can anyone tell me what Tuple space is ??? Kenny Cooper Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wang-fang@CS.YALE.EDU (Fang Wang) Subject: Need information about vector compiler/language Date: 4 Nov 1993 23:03:05 -0500 Organization: Yale University Computer Science Dept., New Haven, CT 06520 Hi, I need to find information about vectorizing compilers and about vector constructs in languages for using vector units or vector machines. Can anyone help me find the relevant references? I'd greatly appreciate it if someone could also tell me anything about Cray's vector language and compiler for its vector machines. Many thanks in advance. --Fang Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: m751804@tuan.hpc.citri.EDU.AU (Tim Kiddle) Subject: Supercomputer benchmarks Summary: Dongarra benchmark results - where? Keywords: benchmark supercomputer Organization: Collaborative Information Technology Research Institute Can anyone tell me if the Jack Dongarra benchmark results are available on the Internet, and if so, where? Thanks Tim Kiddle Bureau of Meteorology, Australia Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kohut1@urz.unibas.ch Subject: Teraflop and Dataflow Computers ? Organization: University of Basel, Switzerland Recently I read an old magazine about supercomputing (it was published in 1990 or so). There was an article about Teraflop computers which should be commercially available in 1993. Two approaches were mentioned: 1) a new version of the SIMD-type Connection Machine and 2) an MIMD-type European computer built from 64k Inmos T9000 Transputers. I never heard about any successes (or failures) of these projects. So: are there any Teraflop computers around anywhere yet? My second question concerns dataflow computers. In some literature somebody wrote that a dataflow computer is still hypothetical and that no such computer has been built so far.
Somebody else has written that there ARE dataflow computers available. So where's the truth? If dataflow computers do exist, is there any specification around for downloading (ftp)? Thanks for any replies! -Peter- E-Mail : Kohut1@urz.unibas.ch Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: [Q] Problem with parallel threads with 3L-C v 2.1.1 From: andreas@nlp.physik.th-darmstadt.de (Andreas Billo) Reply-To: andreas@nlp.physik.th-darmstadt.de Organization: Institut fuer Angewandte Physik, TH-Darmstadt, Germany Follow-Up: comp.parallel Date: Thu, 4 Nov 1993 16:40:30 GMT Approved: parallel@hubcap.clemson.edu The program starts four parallel threads. The thread RecS blocks the other threads under the following condition: if the for-loop contains a lot of program code, the other threads are blocked even though that code is never executed. If we insert a break condition in the for-loop, the other threads are executed after the for-loop finishes. Thanks for any ideas and hints. andreas

main()
{
    StartAdr2 = malloc( 300000 );
    thread_start( FMain1, StartAdr2,       10000, ... );
    thread_start( FMain2, StartAdr2+10000, 10000, ... );
    thread_start( FMain3, StartAdr2+20000, 10000, ... );
}

void FMain1( StartAdr )
char *StartAdr;
{
    ...
    thread_start( .. )
    thread_start( .. )
    thread_start( RecS, StartAdr2+50000, 10000, ... )
}

void FMain2( StartAdr )
char *StartAdr;
{
    ...
    par_print("...");
}

void FMain3( StartAdr )
char *StartAdr;
{
    ...
    par_print("...");
}

void RecS( ... )
{
    ...
    /* The body of this loop is never executed (1 == 2 is always false),
       yet the other threads stall whenever the loop contains a lot of code. */
    for(;;){ if (1 == 2) { code}; };
}

---
Name: Andreas Billo
Organization: Institut fuer Angewandte Physik, Nichtlineare Physik, TH Darmstadt
Address: Schlossgartenstr. 7, 64289 Darmstadt, Germany
Phone: +49 6151 164086   Fax: +49 6151 164534
Internet: andreas@nlp.physik.th-darmstadt.de

IBM: Iesus Babbage Mundi, Iesum Binarium Magnificamur. AMDG: Ad Maiorem Dei Gloriam? Von wegen Ars Magna, Digitale Gaudium! IHS: Iesus Hardware & Software! Casaubon Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: golabb@cs.curtin.edu.au (Bart Golab) Subject: Mapping binary trees onto a mesh Organization: Curtin University of Technology Keywords: binary trees, meshes Hello, I am in need of help. I am currently working on a project which involves embedding binary trees into (faulty) meshes. However, I have great difficulty finding articles on this subject (I suspect not much work has been done in this particular area). Is there anybody out there who is also involved with the mapping of different multiprocessor architectures? If you have references to books, journal articles or anything else that discusses the mapping of binary trees onto faulty/non-faulty meshes (or vice versa), I would gladly hear from you. I really hope someone can help me out in this matter. To those who can, thanks in advance. .............
Bart --------------------------------------------------------------------------------------------- Bart Golab Curtin University of Technology School of Computing Department of Computer Science Perth, Western Australia Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stgprao@st.unocal.COM (Richard Ottolini) Subject: Re: AI and Parallel Machines Message-ID: <1993Nov5.153315.2624@unocal.com> Organization: Unocal Corporation In article <1993Nov4.152811.24420@hubcap.clemson.edu> angelo@carie.mcs.mu.edu (Angelo Gountis) writes: >Hello All, > >I am looking for references regrading the impact parallel processing has >had on projects involving AI. I realize this is rather vague but I have >not been able to narrow it down much from the information I have found as >of now. I want to approach this from the angle of what parallel >processing has allowed AI to achieve that would not be fessible/possible >with out it. Thinking Machines started as an A.I. company. They are one of the more successful parallel computing companies. Their customer base is more scientific computing these days. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sabu@panix.com (Sabu Bhatia) Subject: Linda for ALPHA(AXP)/OPENvms Organization: PANIX Public Access Internet and Unix, NYC Hi, Does anyone know if "Linda" has been ported to ALPHA(AXP)/OPENvms? If it has, an ftp site or a vendor name/phone number would be greatly appreciated. best regards, sabu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: bina@cs.buffalo.edu (Bina Ramamurthy) Subject: message rate Sender: nntp@acsu.buffalo.edu Nntp-Posting-Host: centaur.cs.buffalo.edu Organization: State University of New York at Buffalo/Comp Sci Date: Fri, 5 Nov 1993 16:55:02 GMT Apparently-To: comp-parallel@cis.ohio-state.edu I am simulating error detection and recovery in a distributed system. Please email me references that will give me some idea of realistic message rates (and their range). I will summarize the responses. bina Bina Ramamurthy bina@cs.buffalo.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: bellcore!flash.bellcore.com!bhatt@uunet.UU.NET (Sandeep Bhatt) Subject: DIMACS Implementation Challenge III Organization: /u/bhatt/.organization Call for Participation THE THIRD DIMACS INTERNATIONAL ALGORITHM IMPLEMENTATION CHALLENGE In conjunction with its Special Year on Parallel Computing, the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) invites participation in an international Implementation Challenge to study effective parallel algorithms for combinatorial problems. The Implementation Challenge will take place between November 1993 and September 1994. Participants are invited to carry out research projects related to the problem areas specified below and to present research papers at a DIMACS workshop to be held in October 1994. A refereed workshop proceedings will be published. RESEARCH PROJECTS. The use of massive parallelism in discrete combinatorial applications has received far less attention than its use in numerical applications. Despite the large body of theoretical work on parallel algorithms for combinatorial problems, it is unclear what kinds of parallel algorithms will be most effective in practice.
The goal of this challenge is to provide a forum for a concerted effort to study effective algorithms for combinatorial problems, and to investigate the opportunities for massive speedups on parallel computers. The challenge will include two problem areas for research study. 1. Tree searching algorithms Examples: game trees, combinatorial optimization. 2. Parallel algorithms for sparse and dynamic graphs Examples: minimum spanning trees, shortest paths. Participants are welcome to select applications other than the examples given above. The aim should be to pick an application which presents clear technical obstacles to naive parallelization, and to pick large problem instances to warrant the use of massive parallelism. DIMACS SUPPORT. The DIMACS advisory committee will provide feedback on proposals, and DIMACS facilities will serve as a clearing house for exchange of programs and communication among researchers. DIMACS cannot provide financial support for research projects. DIMACS is currently investigating possibilities for participants to access massively parallel processors at the NSF Supercomputing Centers across the US. HOW TO PARTICIPATE. For more information about participating in the Implementation Challenge, send a request for the document "General Information" (available November 15, 1993) to challenge3@dimacs.rutgers.edu. Request either LaTeX format (sent through email) or hard copy (sent through U. S. Mail), and include your return address as appropriate. Challenge materials will also be available via anonymous FTP from DIMACS, and we expect most communication with respect to the Challenge to take place over the Internet. ADVISORY BOARD. A committee of DIMACS members will provide general direction for the Implementation Challenge. Committee members include Sandeep Bhatt, Bellcore and Rutgers University (Coordinator) David Culler, U.C. Berkeley David Johnson, ATT-Bell Laboratories S. Lennart Johnsson, Thinking Machines Corp. and Harvard University Charles Leiserson, MIT Pangfeng Liu, DIMACS. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: prakash@cis.ohio-state.edu (ravi prakash) Subject: Re: CS 6400 Organization: The Ohio State University Dept. of Computer and Info. Science References: <1993Oct27.172532.20787@hubcap.clemson.edu> <1993Nov1.160439.1200@unocal.com> The specifications of CS 6400 indicate a 1.3 gigabytes/sec peak memory bandwidth. I assume that's the bandwidth of the central memory. I would like to know the I/O bandwidth between the online disk storage and the central memory? --------------------------------------------------------------------------- Ravi Prakash prakash@cis.ohio-state.edu Department of Computer and Information Science, The Ohio State University Columbus, OH 43210. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.arch,comp.parallel From: lkaplan@tera.com (Larry Kaplan) Subject: Re: Reference pointer Organization: /etc/organization References: <1993Nov4.164216.4237@hubcap.clemson.edu> In article <1993Nov4.164216.4237@hubcap.clemson.edu> Andre.Seznec@irisa.fr (Seznec Andre) writes: >I am needing a pointer to the following paper: > >"A flexible interleaved memory design for generalized low conflict memory access", by L.S. Kaplan > This paper appears in the Proceedings of The 6th Distributed Memory Computer Conference (DMCC6): @InProceedings{Kapl, Author={L. 
Kaplan}, Title="A Flexible Interleaved Memory Design for Generalized Low Conflict Memory Access", BookTitle={The Sixth Distributed Memory Computing Conference}, Pages={637-644}, Year={1991}} Thanks for your interest. Note that this paper roughly describes the BBN TC2000 interleaver, not the Tera interleaver. Another paper probably needs to be written about the Tera. Laurence S. Kaplan, Tera Computer Co., 400 N. 34th St. Suite 300, Seattle, WA 98103 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: prakash@cis.ohio-state.edu (ravi prakash) Subject: Locality of code Organization: The Ohio State University Dept. of Computer and Info. Science Would anybody please answer two questions for me: 1. What degree of locality has been observed in parallel code, i.e., if the number of instructions executed in a task is x, then what function of x is the actual number of instructions in the task (code size)? 2. What speeds and degrees of code compression have been achieved thus far? If the code size is y megabytes and the size of the compressed code is y', then what is the ratio between y and y', how much time is required to compress y megabytes of code, and what is the speed of the computer used to do the compression? Thanks, ------------------------------------------------------------------------------- Ravi Prakash Office : Bolz Hall, #319b prakash@cis.ohio-state.edu Phone : (614)292-5236 - Off. Department of Computer & Information Science, Fax : (614)292-2911 The Ohio State University, Columbus, OH 43210 ------------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sandeep@karp.albany.edu (Sandeep Kumar Shukla) Subject: system.config file for a network of SUN3s and SPARCs Organization: Computer Science Department SUNYA at Albany NY 12222 Hi, I have installed the Distributed C which is available from the INSTITUT FUR INFORMATIK (Munchen). After the installation I am having some problems with the dcinstall program. Basically the problem is related to the system.config file. We have a number of SUN3s and a few SPARCs on our network. I have followed the format of the system.config file in the manual provided with the package. The error message that I am getting says " The Local Host not specified in the Configuration file." I talked to the system administrator and he said that the fixed disk entry of the config file is not clear. Should that not mention the partition of the file system where I have access? I will be extremely grateful if somebody can send me a sample configuration file for the Distributed C which is installed for a network of SUN3 and SPARC stations and where the file system is on a machine called cook.
Thanks Sandeep Shukla Department of Computer Science State University of NewYork at Albany US email : sandeep@cs.albany.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: kamratha@dopey.sdsc.edu (Anke Kamrath) Newsgroups: comp.parallel,comp.sys.super,comp.unix.cray Subject: Cray User Group Meeting - Spring '94, First Announcement Organization: San Diego Supercomputer Center ****************************************************** ****************************************************** ****************************************************** *** FIRST ANNOUNCMENT *** *** *** *** CRAY USER GROUP *** *** Spring '94 Conference *** *** *** *** March 14-18, 1994 *** *** San Diego, California *** *** *** *** LOCAL ARRANGEMENTS *** ****************************************************** ****************************************************** ****************************************************** Dear Colleague, The San Diego Supercomputer Center (with assistance from The Scripps Research Institute, Cray Research, Inc., and Cray Superserver Research, Inc.) is pleased to invite you to attend the Cray User Group Conference in San Diego, California, March 14-18, 1994. The theme of this year9s conference is Unparalleled Horizons. To help you with arrangements, this brochure contains information about: Conference registration How to submit abstracts for conference papers and posters San Diego area Conference hotel and how to make reservations Spring is a beautiful time of year in San Diego. Whether you like the ocean, the mountains, or the desert, all are nearby. So join us in San Diego, and expect a rewarding experience personally as well as professionally. You will receive the final local arrangements brochure with details of the technical programs in January. If you have any questions, please contact us. We9ll see you in March. Sincerely, Anke Kamrath Local Arrangements Chairperson San Diego CUG San Diego Supercomputer Center LOCATION: Princess Hotel Convention Center, San Diego, California IMPORTANT DATES Conference: March 14-18, 1994 Early Conference Registration Deadline: January 15, 1994 Hotel Registration Deadline: February 10, 1994 Papers and Posters Deadline: December 10, 1993 Late Conference Registration Deadline: February 18, 1994 Last Day to Cancel: February 18, 1994 SPONSORED BY THE SAN DIEGO SUPERCOMPUTER CENTER A National Laboratory for Computational Science and Engineering To the members of CUG: It is my pleasure to cordially invite you to attend the 33rd Cray User Group meeting in San Diego, California, March 14-18, 1994. This years theme "Unparalleled Horizons" will challenge us all as we look ahead to unequaled progress in high-performance computing technology. Calls for Papers and Posters for the Conference are included at the back of this brochure. The deadline for abstracts and poster descriptions is December 10, 1993. Papers are presented in 30-minute time slots as a part of a topical formal session. The poster session is an opportunity to present your work in an informal, unstructured setting using a visual representation (a poster) rather than a paper. There will be an award for the best poster presentation. The Program Committee is planning a stimulating agenda. In addition to the successful Monday morning tutorials, the first day of the Conference will include a "What is CUG" discussion and an Advisory Council meeting. Tuesday morning, Astronaut Sally Ride will be Keynote Speaker. 
General sessions and special interest tracks will be held each day. The Conference concludes Friday at noon. Birds-Of-a-Feather (BOF) sessions will be scheduled to bring together those people interested in discussing specific issues. If you wish to organize a BOF at the Conference, please contact me. A room for BOFs will be available. We welcome you and hope to see you in San Diego. Jean Shuler Program Chairperson CUG Vice President Lawrence Livermore National Laboratory ** CONFERENCE INFORMATION ** LOCATION: San Diego Princess Hotel Convention Center 1404 West Vacation Road San Diego, CA 92109 Conference Office For general arrangements, conference registration, payment, and technical questions: Until March 11, 1994 San Diego Supercomputer Center CUG San Diego P.O.Box 85608 San Diego, CA 92186-9784 Phone: 1-619-534-8333 Fax: 1-619-534-5152 TDD : 1-619-534-5001 E-mail: sdcug@sdsc.edu >From March 13-18 San Diego Princess Hotel Convention Center Phone: 1-619-274-7141 Fax: 1-619-581-5908 PROGRAM, PAPERS, AND POSTERS A preliminary conference program will be included in the January mailing, and the technical program will be finalized shortly before the conference. To submit a paper or poster, return the Call for Papers and Call for Posters forms at the end of this brochure as directed. All questions regarding the program should be directed to the Program Chairperson. REGISTRATION Register as soon as possible to help the Local Arrangements Committee plan a well-organized conference. All conference registrations postmarked on or before January 15, 1994 are eligible for the reduced registration fee. We encourage all attendees to take advantage of this reduced fee. All registrations, with payment, must be postmarked on or before February 18th, 1994, using airmail, first-class postage. Confirmation will be sent after payment is received. CANCELLATION Cancellation requests received by the CUG office on or before February 18, 1994 will be refunded in full. Cancellations received after February 18 will not be refunded. REGISTRATION DESK Pick up your registration materials and a conference badge at the Registration Desk in the Princess Convention Center in the Princess Ballroom foyer. You must wear the conference badge to participate in CUG sessions and events. CONFERENCE OFFICE HOURS Sunday, March 13 4:00 pm to 7:00 pm Monday, March 14 8:00 am to 7:00 pm Tuesday, March 15 8:00 am to 6:00 pm Wednesday, March 16 8:00 am to 5:30 pm Thursday, March 17 8:00 am to 5:00 pm Friday, March 18 8:00 am to 12:30 am ** FACILITIES *** MESSAGES The conference office at the Princess Hotel will be staffed throughout the conference to assist with special requirements. Incoming telephone calls may be directed to the office at 1-619-274-7141. Messages will be posted near the registration area. E-MAIL AND PERSONAL COMPUTERS VT100 compatible terminals will be available to allow participants to send and receive e-mail and use the Internet. A Macintosh with Persuasion, MS Word, MS Excel, and MacDraw II and a printer will be available. COPYING A copy machine will be available for making limited copies. If you have handouts or documentation to distribute at the conference, please bring a sufficient number of copies with you. DINING SERVICES Refreshment will be available during breaks throughout the conference. Breakfast will be served Tuesday through Friday, and lunch Tuesday through Thursday. Food and drinks will be served at each of the social events. 
Special dietary requirements should be specified on the conference registration form. There are restaurants and bars located on the hotel grounds. If you are interested in sampling some of San Diego's excellent Mexican and Californian cuisine, ask the Concierge for recommendations. SPECIAL NEEDS The Princess Hotel is in compliance with the American Disabilities Act (ADA). If you have any special needs, please indicate them on the conference registration form. ** HOTEL INFORMATION ** The conference hotel is the San Diego Princess Hotel and Conference Center, located in the center of San Diego on a forty-four acre island in Mission Bay. The accommodations consist of bungalows nestled among beautiful lagoons and lush tropical landscaping. A block of rooms at a special conference rate has been reserved (including a limited number at a government rate). The conference room rates are only available for reservations made before February 10, 1994. We recommend that you stay at the Princess, as there are no other hotels within walking distance. If the room block is filled, the hotel will attempt to locate you at another nearby hotel. Room rates at other hotels cannot be guaranteed. Given the likelihood that attendance at the conference will be large, we recommend you reserve your room as soon as possible. For your convenience, use the Hotel Registration form at the end of this brochure. ** TRANSPORTATION ** San Diego is a large city, and public transportation is not very practical to get around the area. If you plan to explore San Diego or sightsee during your stay, we recommend you rent a car. The San Diego International Airport (Lindbergh Field) is within 10 minutes drive of the Princess Hotel. SUPERSHUTTLE OR TAXI If you do not rent a car at the airport, we recommend you use SuperShuttle to get to the Princess Hotel from the airport. The Shuttle cost is $6.00 each way. To arrange for the shuttle: Find the hotel courtesy phoneboard located in the baggage claim area. Press #69 and the operator will direct you to the nearby SuperShuttle loading area. Or call 278-5700 from any public telephone. When returning to the airport, make advance reservations for SuperShuttle. Ask the Concierge or front desk to book your return reservation, or call 278-8877 to make a reservation yourself. Taxis are available immediately outside the baggage claim area at the airport. The taxi fare to the Princess is about $10.00. DIRECTIONS FROM AIRPORT If you are driving from the airport, take Harbor Drive South (towards downtown San Diego) to Grape Street. At the light turn left. Follow the signs to Interstate 5 North. Take Interstate 5 North to Sea World Drive. At the top of the offramp at the light, turn left and go over the bridge onto Sea World Drive. Follow Sea World Drive to West Mission Bay Drive/Ingraham Street. Veer to the right. Stay on Ingraham until you reach West Vacation Road, where you will see the sign for the San Diego Princess Hotel. Turn left into the hotel grounds. FROM LOS ANGELES If you are driving from the Los Angeles area, take Interstate 5 South to Sea World Drive. At the top of the offramp at the light turn right. Follow Sea World Drive to West Mission Bay Drive/Ingraham Street. Veer to the right. Stay on Ingraham until you reach West Vacation Road, where you will see the sign for the San Diego Princess Hotel. Turn left into the hotel grounds. TRAVEL TO MEXICO Persons with U.S. citizenship may freely visit nearby Mexico (17 miles from downtown) with only normal identification. If you are not a U.S. 
citizen, you9ll need to carry your passport and have a multiple entry visa for the U.S. to visit Mexico. If you leave the U.S. with a single entry visa, you will not be able to return to the U.S. from Mexico; this is considered a second entry. ** SOCIAL EVENTS ** CRI RECEPTION All participants and guests are invited to a Monday evening reception sponsored by CRI. Newcomer9s Reception All new CUG member sites and first-time CUG attendees are invited to a reception Tuesday evening on the Governor9s Lawn. NIGHT-OUT The traditional CUG Night Out on Wednesday (6:00 until 10:15) is a cruise on San Diego Bay with dinner, music, and spectacular views of San Diego's downtown skyline and Coronado Bridge. The Night Out is included with registration, but additional tickets for guests must be purchased separately. Register and purchase any additional tickets as early as possible to guarantee space for guests. GUEST PROGRAMS, TOURS, AND OTHER ACTIVITIES If you plan to extend your stay and vacation in San Diego, the Princess Hotel offers many activities, including Botanical walks around the island 18-hole golf course Fitness Center Jogging and par course Bicycles and Quadracycles Tennis courts Swimming pools and whirlpools Shuffleboard Croquet course Wind surfing, sailing, and power and paddle boats Some of these activities are free, and others are provided at an additional cost. Ask the Concierge for information about their fitness programs and recreation pass. The Princess Hotel Concierge staff is also available to assist you with dinner reservations, directions, and tours. Discounted tickets for the San Diego Zoo, Wild Animal Park, and Sea World can be purchased from the Concierge Desk on the day you visit each attraction. The staff can also arrange fishing, scuba diving, water skiing, or golf at one of several championship courses located within 30 minutes of the hotel. Shopping opportunities in the area include Seaport Village on San Diego Bay, Horton Plaza in downtown San Diego, and Old Town. You will need transportation to and from these areas, so plan to rent a car or hire a taxi. CLIMATE/CLOTHING San Diego has mild temperatures all year round. You rarely need a topcoat or raincoat. Evenings can be cool, so bring a sweater or jacket. The temperatures in March can be warm, so shorts and swimwear may be desirable. Average temperatures range from 50-66 F (or 10-19 C). Most San Diego restaurants welcome casual attire. ** REGISTRATION INFORMATION ** REGISTRATION Complete the registration form (next page) and mail or fax it to CUG San Diego. Conference fees are due with your registration. All payment must be in U.S. dollars from checks drawn on U.S. banks or by electronic funds transfer. Credit cards or invoices are not accepted. PAYMENT BY CHECK Make checks payable to "CUG San Diego". Indicate your CUG site code on your check, and send it with your registration form. Be sure all currency conversions and transmission costs have been paid by your installation site. PAYMENT BY ELECTRONIC FUNDS TRANSFER You may pay conference fees by transferring the appropriate amount (increased by $8.50 to cover the transfer fee) to: CUG San Diego Bank of America La Jolla Plaza Branch # 1102 4380 La Jolla Village / 100 San Diego, CA 92122 Account # 11027 04167 Routing # 121000358 Be sure to include your name and site on the order. Send a copy of the transfer order with your registration form. 
ADDRESSES Local Arrangements: Chairperson: Anke Kamrath Coordinator: Ange Mason CUG San Diego San Diego Supercomputer Center P.O.Box 85608 San Diego, CA 92186-9784 USA Phone: 1-619-534-8333 Fax: 1-619-534-5152 TDD: 1-619-534-5001 E-Mail: sdcug@sdsc.edu Program: Chairperson: Jean Shuler National Energy Research Supercomputer Center (NERSC) P.O.Box 5509 L-561 Lawrence Livermore National Laboratory Livermore, CA 94551 USA Phone: 1-510-423-1909 Fax: 1-510-422-0435 E-Mail: shuler@nersc.gov ** CONFERENCE REGISTRATION FORM ** Early Registration Deadline: January 15, 1994 Late Registration Deadline: February 18, 1994 Please type or block print separate registration forms for each Conference attendee. Mail or fax the registration with a check or a copy of a funds transfer order to the following address: CUG San Diego, Ange Mason San Diego Supercomputer Center P.O. Box 85608 San Diego, CA 92186-9784 USA Phone: 1-619-534-8333, Fax: 1-619-534-5152 TDD: 1-619-534-5001 __________________________________________________________________________ Full Name (Last, First) __________________________________________________________________________ Organization Name CUG Site Code (Mandatory) __________________________________________________________________________ Department* Mail Stop __________________________________________________________________________ Signature of Installation Delegate CUG Site Code (If not employed by member site or CUG) __________________________________________________________________________ Address __________________________________________________________________________ City State/Province Postal/Zip Code Country __________________________________________________________________________ Daytime Phone (Country,Area Code,Number) Fax (Country,Area Code,Number) __________________________________________________________________________ Electronic Mail Address __________________________________________________________________________ Guest Name __________________________________________________________________________ Special Dietary Requirement (Please Specify) __________________________________________________________________________ Other Special Needs (Please Specify) __________________________________________________________________________ Emergency Contact and Phone Number ___ Check here to have your name/address omitted from the official CUG Proceedings ___ Check here if you are a new member or first-time attendee Early Registration (before January 15, 1994) $550 Late Registration (between Jan.15-Feb.18,1994) $600 A. Registration fee: $ ______ Additional copies of Proceedings $ 30 B. Additional Proceedings cost (quantity ___): $ ______ Additional guest tickets for Night Out $100 C. Additional guest ticket cost (number: ___): $ ______ D. For electronic funds transfer fee, add $8.50: $ ______ E. Payment due, in U.S. funds only (A + B + C + D): $ ______ ** CALL FOR PAPERS *** Deadline: December 10, 1993 Please type or block print the information requested on this form. Mail or fax to the Program Chair: Jean Shuler National Energy Research Supercomputer Center P.O. 
Box 5509 L-561 Lawrence Livermore National Laboratory Livermore, CA 94551 USA Phone: 1-510-423-1909 Fax: 1-510-422-0435 E-mail: shuler@nersc.gov ____________________________________________________________________________ NAME ____________________________________________________________________________ ORGANIZATION CUG SITE CODE (MANDATORY) ____________________________________________________________________________ DEPARTMENT MAIL STOP ____________________________________________________________________________ ADDRESS ____________________________________________________________________________ CITY STATE/PROVINCE POSTAL/ZIP CODE COUNTRY ____________________________________________________________________________ Daytime Phone (Country,Area Code,Number) Fax (Country,Area Code,Number): ____________________________________________________________________________ ELECTRONIC MAIL ADDRESS ____________________________________________________________________________ TITLE OF PAPER ABSTRACT (TWO OR THREE SENTENCES) AUDIO/VISUAL REQUIREMENTS Video 1/2": VHS/ NTSC, PAL, SECAM Video 3/4" U-matic/NTSC 35 mm slide projector Overhead Projector Other (specify): ______________________________________ SESSION IN WHICH YOU WISH TO PRESENT General Sessions Operating Systems Applications Operations Networking Performance Management Software Tools Graphics Mass Storage Systems User Services ** CALL FOR POSTERS ** Deadline: December 10, 1993 Please type or block print the information requested on this form. Mail or fax to: Larry Eversole JPL/Cal Tech MS 301-455 4800 Oak Grove Drive Pasadena, CA 91109 USA Fax: 1-818-393-1187 E-mail: eversole@voyager.jpl.nasa.gov ____________________________________________________________________________ NAME ____________________________________________________________________________ ORGANIZATION CUG SITE CODE (MANDATORY) ____________________________________________________________________________ DEPARTMENT MAIL STOP ____________________________________________________________________________ ADDRESS ____________________________________________________________________________ CITY STATE/PROVINCE POSTAL/ZIP CODE COUNTRY ____________________________________________________________________________ Daytime Phone (Country,Area Code,Number) Fax (Country,Area Code,Number): ____________________________________________________________________________ ELECTRONIC MAIL ADDRESS ____________________________________________________________________________ TITLE OF POSTER _____________________________________________________________________________ SHORT DESCRIPTION ROUGH MOCK-UP OF THE POSTER ** HOTEL REGISTRATION ** CRAY USER GROUP Meeting, March 14-18,1994 Deadline: February 10, 1994 Please type or block print the information requested. 
Mail or fax it to the following address: San Diego Princess Hotel Accommodation Reservation Request 1404 West Vacation Road San Diego, California 92109-7994 USA Phone: 1-619-274-4630 TDD: 1-619-274-4630 FAX: 1-619-581-5929 (Allow 24 hours for confirmation) _____________________________________________________________________________ Last Name First Name _____________________________________________________________________________ Mailing Address _____________________________________________________________________________ City State/Province Postal/ZIP Code Country _____________________________________________________________________________ Daytime Phone (Country,Area Code,Number) Fax (Country,Area Code,Number): _____________________________________________________________________________ Date of Arrival Estimated time of arrival Date of Departure Standard room rate $120 Government room rate* $79 (Government ID required) Any additional persons $15 (Number _________ ) 1 Bed ___ 2 Beds ___ Smoking Room ___ Non-Smoking Room ____ All rates subject to tax. Current tax is 9%. * A limited number of government-rate rooms is available. If you qualify for a government-rate room please make your reservations as early as possible to ensure a room at this rate. Required: ONE NIGHT ROOM DEPOSIT INCLUDING TAX (A) Enclose a check in the amount of one night's lodging (plus 9% tax), or (B) Complete the credit card information, and your account will be charged in the amount of one night's lodging, plus 9% tax, upon receipt by the hotel. This deposit guarantees first night availability and will be credited to the last night of your stay. An early check-out will forfeit the deposit. Be sure arrival and departure dates are correct. (A) Amount of enclosed check $ _________________________ (B) Amount charged to credit card $ _________________________ VISA MASTER CARD Card number _______________________________ Cardholder Name ______________________________ _____________________________________________________________________________ SIGNATURE CARD EXPIRATION DATE o Hotel check-in time and guestroom availability is 4:00 pm. o Check-out time is 12:00 noon. For reservations made after February 10, 1994, or after the group block is filled, the hotel will extend the group rate based upon availability. If the Princess is sold out, the hotel will refer you to a nearby hotel. ___________________________________________________________________ SIGNATURE DATE Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mitrovic@diego.llnl.gov (Srdjan Mitrovic) Subject: Re: Teraflop and Dataflow Computers Read your question on USENET and decided to try to answer it. I believe that we are still a long way from Teraflop computations for "normal" applications. If anybody is going to reach it soon, then maybe it will be the Tera parallel processor, which should be released sometime in '94. The T9000 has been delayed a lot; I wonder how far they have got. It is a question at what scale a T9000 machine can be built at all. Of course, it is possible to get Teraflop computation with embarrassing parallelism when enough fast processors are interconnected (the Internet as interprocess communication mechanism would suffice). There have been several dataflow computers built, but the dynamic ones are far from being competitive. There are static dataflow computers (like the one built by Gunzinger at ETH Zuerich) that provide good results but lack flexibility.
I think that in signal processing we will have some kind of simple static dataflow computers implemented. Most recent trends go toward a mixture of dataflow and conventional architectures, the so-called multithreading computers. I used to work in project ADAM at ETH Zuerich where such kind of architecture has been developped. Two soon-to-be commercialized multithreading architectures are Tera and *T (Motorola). Hope I helped a little bit. Regards Srdjan Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stgprao@st.unocal.COM (Richard Ottolini) Subject: Re: Teraflop and Dataflow Computers ? Organization: Unocal Corporation References: <1993Nov5.133445.9491@hubcap.clemson.edu> In article <1993Nov5.133445.9491@hubcap.clemson.edu> kohut1@urz.unibas.ch writes: >Recently I read an old magazine about supercomputing (it was published in >1990 or so). There was an article about Teraflop computers which should be >commercialiy available in 1993. Two approaches were mentioned : 1) a >new version of SIMD type Connection Machine and 2) an MIMD type European >Computer build of 64k Inmos T9000 Transputers. >I never heard about any success (or failures) of these projects. So : >are somewehre any Teraflop computers around yet ? > Two general purpose computers claim to have run code at 100+ Gigaflops in 1993: The Intel Paragon and Fujitsu VP. Following the 10-5 observation- 10x speed increase every 5 years- these are a little ahead of the curve. >My second question concernes Dataflow computers. In some literature somebody >wrote that a dataflow computer is still hypothetical and no such computer >could be build till now. Somebody else has written, that there ARE dataflow >computers available. So where's the truth ? If there exist some dataflow >computers : is there any specification around for downloading (ftp) ? Superscalar chips implement some elements of dataflow in scheduling "simultaneous" instructions. No heavy duty commercial stuff yet outside of some samples at MIT, Japan and a other places. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: dbg@SLAC.Stanford.EDU (David Gustavson) Subject: SCI coherence--RamLink I/O Organization: SLAC Computation Research Group Newsgroups: comp.arch,comp.parallel,comp.sys.super SCI Coherence Works: At the recent NASA-Ames Cluster Workshop, a well-known senior computer architect said (approximately): "Convex's use of SCI isn't a real test of SCI because Convex's SPP is only using SCI as a transport mechanism and is not using SCI cache coherence; thus, SCI is still far from having a meaningful application, remains a research toy, and may remain untested for years." I was very surprised to hear this, so I asked Convex if there was any basis for this statement, and I received the following response: >SPP-1 is fully cache coherent. We are using every feature the > Dolphin design can support, plus a few we've patched around. > The inter-node coherence is, in fact, SCI. The only thing I can think of that may have confused this issue is that Convex groups several PA/RISC processors in each SCI node, and naturally uses their native coherence mechanisms at that level, within the node. Translation from proprietary coherence mechanisms to SCI coherence has to be done at a secondary cache for all existing processor chips, because their snoop-based coherence mechanisms don't scale past a few processors (which share a local bus). 
(Extending snooping coherence mechanisms to multiple buses can be done, but is expensive and scales badly. Such bus bridges must snoop each bus on behalf of the other, and have their own directories. SCI bridges, on the other hand, need know nothing about coherence: they just pass packets based on the 16 high bits of the address information, the same whether coherence is being used or not.) Furthermore, Convex isn't the only company that's using SCI. Though only Convex and Unisys have said so publicly, others are hard at work too, on unannounced products. I doubt if any other coherence protocol has been tested so thoroughly by so many people before going into production computers! SCI Coherence is Optional: A lot of people are frightened of cache coherence, and cite SCI's support of coherence as a reason they can't use SCI. This is quite irrational--if you need coherence, better to use a widely tested, well understood (by the experts, anyway!), and well specified mechanism like SCI. If you don't need it, you've wasted 2 bits of the SCI packet header. Big deal. If the average packet size is 32 bytes in an incoherent system (16 header plus 16 data) you've wasted only 2/256, less than one percent of the bandwidth. And in return for that you leave the option open to add coherence later if and where you need it. (Coherent and incoherent operations can be intermixed at will in SCI systems.) If you argue that you could design a more efficient system than SCI if it were designed from the ground up to eliminate coherence support, I say you're mistaken. If you compress the header as much as you can, you're not going to get it down to 8 bytes, the next point with a significant speed gain, because the address alone is 8 bytes in a modern 64-bit system and 2 of SCI's 16 "header" bytes are really Error-Detecting Code that follows the data, at the end of the packet. If you only shave a few bytes off, you actually lose, because SCI gains a lot of speed by doing all of its buffer allocation in uniform blocks. If you have to subtract buffer pointers to make storage decisions, you're going to have to slow the transmission speed or add pipeline stages that add latency delay. SCI uses the RISC principle of keeping things simple as possible in order to run faster--making the chips more complicated in order to save a few percent of bandwidth is a big loser when you have to slow the clock as a result! RamLink is Cheap and Simple: However, if you don't need SCI's scalable multiprocessor features, for example if you just want a faster-than-bus processor I/O-bus replacement, you could use RamLink. RamLink avoids the multiprocessor and multimaster issues by allowing only one controller on a ringlet (1 to 62 devices). It uses a byte-wide path, initially at 500 MBytes/s, and though it was designed primarily as a high performance memory interface it includes enough capability (e.g. interrupts) to make it useful for I/O systems and even bridges to other buses. RamLink (IEEE P1596.4 working group, chaired by Hans Wiggers of HP, wiggers@hplhaw.hpl.hp.com) uses low-voltage differential signaling, developed in the LVDS project (IEEE P1596.3 working group, chaired by Stephen Kempainen of National, stephen@lightning.nsc.com). This signaling is already in use in National's QuickRing chips. This is a very clean and fast point-to-point signaling technology that copes well with connectors, package pins, and even cable connections. 
Future versions can be expected to run at even higher speeds (we know that in a few years people will want multi-GByte/s links). It uses quarter-volt swings centered on 1.2 volts, for compatibility with several generations of CMOS, BiCMOS, bipolar, and GaAs technology. Dave Gustavson -------------------------------------------------------------- -- David B. Gustavson, Computation Research Group, SLAC, POB 4349 MS 88, Stanford, CA 94309 tel (415)961-3539 fax (415)961-3530 -- What the world needs next is a Scalable Coherent Interface! -- Any opinions expressed are mine and not necessarily those of the Stanford Linear Accelerator Center, the University, or the DOE. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: toyin@mailserver.aixssc.uk.ibm.com (Toyin omo Adelakun) Subject: Re: Tuple space, What is it ???? Organization: IBM AIX Systems Support Centre, UK. References: <1993Nov5.133422.9205@hubcap.clemson.edu> kjcooper@gandalf.rutgers.edu (Kenneth Cooper) writes: : : Can anyone tell me what Tuple space is ??? : Kenny Cooper : Tuple space, as used by Linda programming extensions, is a means of sharing data--"passing messages"--between possibly concurrent processes. A tuple is an arbitrary data structure (the stress being on _structure_): (String, String) is a two-tuple or pair, and (String, Int, String, Real, Real) is a quintuple. Processes "drop tuples into TS" for other communicating processes to use. Receiving processes read tuples from TS by a process of pattern-matching. There are destructive and non-destructive reads from TS. Strictly, the data structures of our earlier illustration are best described as tuple _schemata_: associated tuples may be ("Athens", "OH") and ("Mick", 33, "1 Arbor Road", 77.0, 185.0) The scheme goes something like this: Writer does an Output("Athens", "OH") and some companion process, Reader, does Input(String, String) Reading a tuple from TS involves matching the individual "fields", and (to the best of my rusty recollection) delivers the "next available tuple" to the reader process. For this reason, Reader may end up not with "("Athens", "OH")", but with '("La Jolla", "CA")'--which Writer may've dropped into TS previously. TS may be regarded as a pool of memory locations. The contents of these locations are more relevant than their addresses. Ref: 1) Ben-Ari, M. _Principles of Concurrent and Distributed Programming_, Prentice-Hall, 199x. I hope that is of some help. Regards, Toyin. PS: I'm not confident enough to spell the name of the author of the original Linda papers, but I know it starts "Gele..." (my apologies). -- omo Adelakun, Toyin K. Phone: (44-)0256-343 000 x319125 UKnet: toyin@aixssc.ibm.co.uk pOBODY'S Nerfect - Anon. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ludwig@Informatik.TU-Muenchen.DE (Thomas Ludwig) Subject: Paragon Message-Passing Environment for Workstations Organization: Technische Universitaet Muenchen, Germany Paragon Message-Passing Environment for Workstations Available In order to develop applications for Paragon systems and to run Paragon applications on a network of workstations we have developed the NXLIB programming library. We are now releasing V1_0 of the package under the terms of the GNU license agreement to the Paragon and workstation community (currently an implementation for Sun SPARC has been done, but ports to further machines will follow!). 
The sources of the library and a User's Guide are available via anonymous ftp from ftpbode.informatik.tu-muenchen.de. The related files are located in the NXLIB directory. To establish personal contacts to the authors the nxlib@informatik.tu-muenchen.de email address can be used. ===== Stefan Lamberts, Georg Stellner, Dr. Thomas Ludwig, Prof. Dr. Arndt Bode Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm,comp.theory,comp.org.ieee,info.theorynt From: das@ponder.csci.unt.edu (Sajal Das) Subject: Call for Papers Reply-To: comp-org-ieee@zeus.ieee.org Organization: University of North Texas, Denton ******************* * CALL FOR PAPERS * ******************* JOURNAL OF COMPUTER & SOFTWARE ENGINEERING -------------------------------------------- SPECIAL ISSUE on PARALLEL ALGORITHMS & ARCHITECTURES (Tentative Publication Date: January 1995) Due to fundamental physical limitations on processing speeds of sequential computers, the future-generation high performance computing environment will eventually rely entirely on exploiting the inherent parallelism in problems and implementing their solutions on realistic parallel machines. Just as the processing speeds of chips are approaching their physical limits, the need for faster computations is increasing at an even faster rate. For example, ten years ago there was virtually no general-purpose parallel computer available commercially. Now there are several machines, some of which have received wide acceptance due to reasonable cost and attractive performance. The purpose of this special issue is to focus on the desgin and analysis of efficient parallel algorithms and their performance on different parallel architectures. We expect to have a good blend of theory and practice. In addition to theoretical papers on parallel algorithms, case studies and experience reports on applications of these algorithms in real-life problems are especially welcome. Example topics include, but are not limited to, the following: Parallel Algorithms and Applications. Machine Models and Architectures. Communication, Synchronization and Scheduling. Mapping Algorithms on Architectures. Performance Evaluation of Multiprocessor Systems. Parallel Data Structures. Parallel Programming and Software Tools. *********************************************************************** Please submit SEVEN copies of your manuscript to either of the * Guest Editors by May 1, 1994: * * *********************************************************************** Professor Sajal K. Das || Professor Pradip K. Srimani * Department of Computer Science || Department of Computer Science * University of North Texas || Colorado State University * Denton, TX 76203 || Ft. Collins, CO 80523 * Tel: (817) 565-4256, -2799 (fax) || Tel: (303) 491-7097, -6639 (fax) * Email: das@cs.unt.edu || Email: srimani@CS.ColoState.Edu * *********************************************************************** INSTRUCTIONS FOR SUBMITTING PAPERS: Papers should be 20--30 double spaced pages including figures, tables and references. Papers should not have been previously published, nor currently submitted elsewhere for publication. Papers should include a title page containing title, authors' names and affiliations, postal and e-mail addresses, telephone numbers and Fax numbers. Papers should include a 300-word abstract. If you are willing to referee papers for this special issue, please send a note with research interest to either of the guest editors. 
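Following up on the tuple-space question and reply earlier in this digest: the sketch below is a toy illustration only, not C-Linda and not any shipping Linda product. Real Linda systems provide coordination operations (out, in, rd, eval) as language extensions with typed fields and formal ("?") parameters; here the names ts_out and ts_in, the fixed (tag, integer) tuple shape, and the pthread-based pool are all invented for the example. It shows just the two essential points of the reply above: writers drop tuples into a shared pool, and readers take them out by matching on content rather than by address, with in() being a destructive read.

/*
 * Toy tuple space (illustrative only; NOT C-Linda).
 * ts_out() drops a (tag, value) tuple into a shared pool;
 * ts_in() blocks until a tuple with a matching tag appears,
 * returns its value, and removes it (a destructive read).
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define MAX_TUPLES 64

struct tuple { char tag[32]; int value; int used; };

static struct tuple space[MAX_TUPLES];               /* the tuple pool */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;

static void ts_out(const char *tag, int value)
{
    int i;
    pthread_mutex_lock(&lock);
    for (i = 0; i < MAX_TUPLES; i++) {
        if (!space[i].used) {                        /* first free slot */
            strncpy(space[i].tag, tag, sizeof(space[i].tag) - 1);
            space[i].tag[sizeof(space[i].tag) - 1] = '\0';
            space[i].value = value;
            space[i].used = 1;
            break;
        }
    }
    /* if the pool is full the tuple is silently dropped -- fine for a toy */
    pthread_cond_broadcast(&changed);
    pthread_mutex_unlock(&lock);
}

static int ts_in(const char *tag)                    /* destructive read */
{
    int i, value;
    pthread_mutex_lock(&lock);
    for (;;) {
        for (i = 0; i < MAX_TUPLES; i++) {
            if (space[i].used && strcmp(space[i].tag, tag) == 0) {
                value = space[i].value;
                space[i].used = 0;                   /* consume the tuple */
                pthread_mutex_unlock(&lock);
                return value;
            }
        }
        pthread_cond_wait(&changed, &lock);          /* nothing matches yet */
    }
}

static void *writer(void *arg)                       /* a "Writer" process */
{
    ts_out("square-of-7", 49);
    ts_out("square-of-3", 9);
    return arg;
}

int main(void)
{
    pthread_t w;
    pthread_create(&w, NULL, writer, NULL);
    /* The "Reader" names what it wants, not where it is stored. */
    printf("square of 3 = %d\n", ts_in("square-of-3"));
    printf("square of 7 = %d\n", ts_in("square-of-7"));
    pthread_join(w, NULL);
    return 0;
}

Compiled with something like "cc tuple.c -lpthread", the reader obtains tuples in the order it asks for them, regardless of the order in which the writer deposited them, which is the associative-matching behaviour described in the tuple-space reply.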
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: parallel@netcom.com (B. Mitchell Loebel) Subject: The PARALLEL Processing Connection - November 8th meeting notice Organization: NETCOM On-line Communication Services (408 241-9760 guest) Date: Sun, 7 Nov 1993 12:28:36 GMT Apparently-To: comp-parallel@uunet.uu.net HyperC - An Excellent Data-Parallel Language from France On November 8th, founder Philippe Clermont of HyperParallel Technologies will present HyperC, a data-parallel extension to ANSI-C. HyperC is currently available in single workstation versions; it will be available by the end of this year for workstation farms and on parallel computers in 1994. Philippe will discuss the use of the data-parallel programming model on SIMD and MIMD machine architectures, the underlying synchronous mechanism and the advantages it offers in debugging. He'll talk about algorithm design and he will examine the way in which HyperC compiler technology enables developers to target such parallel architectures as SIMD and MIMD distributed memory and MIMD shared memory. HyperParallel will offer a seminar to a select group of PPC members in the near future; Philippe's presentation will tell you if such a seminar is for you. Additionally, a new American company is being formed to further develop and market this excellent technology. PPC has already received assurances that its members will be strongly considered for principal positions. A discussion of member entrepreneurial projects currently underway will begin at 7:15PM and the main meeting will start promptly at 7:45PM at Sun Microsystems at 901 San Antonio Road in Palo Alto. This is just off the southbound San Antonio exit of 101. Northbound travelers also exit at San Antonio and take the overpass to the other side of 101. Please be prompt; as usual, we expect a large attendance; don't be left out or left standing. There is a $8 fee for non-members and members will be admitted free. -- B. Mitchell Loebel parallel@netcom.com Director - Strategic Alliances and Partnering 408 732-9869 PARALLEL Processing Connection Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: parallel@netcom.com (B. Mitchell Loebel) Subject: The PARALLEL Processing Connection - What Is It? Organization: NETCOM On-line Communication Services (408 241-9760 guest) The PARALLEL Processing Connection is an entrepreneurial association; we mean to assist our members in spawning very successful new businesses involving parallel processing. Our meetings take place on the second Monday of each month at 7:15 PM at Sun Microsystems at 901 South San Antonio Road in Palo Alto, California. Southbound travelers exit 101 at San Antonio ; northbound attendees also exit at San Antonio and take the overpass to the other side of 101. There is an $8 visitor fee for non- members and members ($40 per year) are admitted free. Our phone number is (408) 732-9869 for a recorded message about upcoming meetings. Since the PPC was formed in late 1989 many people have sampled it, found it to be very valuable, and even understand what we're up to. Nonetheless, certain questions persist. Now, well into our fourth year of operation, perhaps we can and should clarify some of the issues. For example: Q. What is PPC's raison d'etre? A. The PARALLEL Processing Connection is an entrepreneurial organization intent on facilitating the emergence of new businesses. 
PPC does not become an active member of any such new entities, i.e., it is not itself a profit center. Q. The issue of 'why' is perhaps the most perplexing. After all, a $40 annual membership fee is essentially free and how can anything be free in 1993? What's the payoff? For whom? A. That's actually the easiest question of all. Those of us who are active members hope to be a part of new companies that get spun off; the payoff is for all of us -- this is an easy win-win! Since nothing else exists to facilitate hands-on entrepreneurship, we decided to put it together ourselves. Q. How can PPC assist its members? A. PPC is a large, technically credible organization. We have close to 100 paid members and a large group of less regular visitors; we mail to approximately 500 engineers and scientists (primarily in Silicon Valley). Major companies need to maintain visibility in the community and connection with it; that makes us an important conduit. PPC's strategy is to trade on that value by collaborating with important companies for the benefit of its members. Thus, as an organization, we have been able to obtain donated hardware, software, and training, and we've put together a small development lab for hands-on use by members at our Sunnyvale office. Further, we've been able to negotiate discounts on seminars and hardware/software purchases by members. Most important, alliances such as those we described give us an inside track to JOINT VENTURE SITUATIONS. Q. As an attendee, what should I do to enhance my opportunities? A. Participate, participate, participate. Many important industry principals and capital people are in our audience looking for the 'movers'! For further information contact: -- B. Mitchell Loebel parallel@netcom.com Director - Strategic Alliances and Partnering 408 732-9869 PARALLEL Processing Connection Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: zxu@ringer.cs.utsa.edu (Zhichen Xu) Subject: information of shared memory machines Organization: Univ of Texas at San Antonio I would like to get information on all existing shared-memory machines. Sending me materials or giving me a list of those machines would be greatly appreciated. email address: zxu@dragon.cs.utsa.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ds@netcom.com (David Schachter) Subject: Video on "Faster Messaging in Shared-Memory Multiprocessors" Organization: NETCOM On-line Communication Services (408 241-9760 guest) Tape #25: Optimizing Memory-Based Messaging for Scalable Shared Memory Multiprocessor Architectures September 8, 1993, 71 minutes From the meeting notice: "Passing messages between programs, referred to as memory-based messaging, is a technique for efficient interprocess communication that takes advantage of memory system performance. Conventional OS support for this approach, however, is inefficient for large scale shared memory multiprocessors and is too complex to be effectively supported in hardware. "This talk presents joint work of the speaker and David Cheriton regarding optimized support for memory-based messaging, in both hardware and software, that provides interprocess communication performance comparable with memory system performance on a scalable memory architecture. [We] describe the communication model, its implementation and hardware support, as well as the performance benefits derived from the model."
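To give a concrete (and deliberately generic) picture of what memory-based messaging means, the sketch below passes a message between two Unix processes through a shared mapping rather than through a kernel IPC primitive; it is only an illustration of the general idea, not the optimized hardware/software scheme the talk describes, and it assumes an mmap() that supports MAP_ANONYMOUS | MAP_SHARED.

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* One-slot mailbox living in memory shared by parent and child. */
  struct mailbox {
      volatile int ready;           /* 0 = empty, 1 = message present */
      char         text[64];
  };

  int main(void)
  {
      struct mailbox *mb = mmap(NULL, sizeof *mb, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      if (mb == MAP_FAILED) { perror("mmap"); return 1; }
      mb->ready = 0;

      if (fork() == 0) {            /* child: the receiver */
          while (!mb->ready)        /* spin until the flag is set */
              ;
          printf("received: %s\n", mb->text);
          _exit(0);
      }

      /* parent: the sender stores the payload, then sets the flag;
         a real system would use proper memory barriers/synchronization. */
      strcpy(mb->text, "hello through shared memory");
      mb->ready = 1;
      wait(NULL);
      return 0;
  }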
Bob Kutter recently completed his Ph.D. at Stanford, working with David Cheriton, researching high performance messaging in shared memory multiprocessors. Before this, he was lead engineer on the "9345" storage subsystem microcode project at IBM and he worked on the "3990" storage subsystem as well. He is currently employed by IBM's AdStar Division. This tape is available through the courtesy of SVNet and Bob Kutter. FOR INFORMATION ON SVNet MEMBERSHIP, contact Paul Fronberg at +1 408 246 1132 or Ralph Barker at +1 408 559 6202 or send mail to P.O. Box 700251, San Jose, CA 95170-0251. SVNet is a Bay Area UN*X Users Group, supported by annual membership fees and donations. Meetings are FREE and PUBLIC. For pricing, delivery, and other information ABOUT THIS VIDEOTAPE, contact me directly. Thank you. -- David Schachter -- ______________________________________-_____________________________________ David Schachter Internet: ds@netcom.com 801 Middlefield Road, #8 CompuServe: 70714,3017 Palo Alto, CA 94301-2916 After 10 am, voice: +1 415 328 7425 USA fax: +1 415 328 7154 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: lee@aero.org (Craig A. Lee) Newsgroups: comp.sys.super,comp.parallel,comp.parallel.pvm Subject: Virtual Machine BoF, SC `93 Organization: The Aerospace Corporation Virtual Machines Birds-of-a-Feather Meeting Supercomputing `93 3:30 - 5:00 PM Wednesday, November 17 Room A-108 Abstraction is used at every level of computation to control complexity and make the construction of larger systems more manageable. Any level of abstraction can be expressed as a virtual machine. Common examples include abstract instruction sets, programming languages, and "meta"-computers. For parallel supercomputing, virtual machines can support a variety of goals including a unified heterogeneous programming model, architecture independence, portability, and enhanced support for debuggers and performance monitoring. A number of open questions, however, surround the use of virtual machines in this arena: . Should these goals be supported by a relatively low-level set of services, a "meta"-computer coordination approach, or something in between, or all of the above? . Is an intermediate form possible that would have sufficient expressibility? . Is it possible to control virtual machine performance from within the programming model(s)? . Is it possible to use the virtual machine concept with an acceptable degree of efficiency and performance? This Birds-of-a-Feather meeting will discuss these and other related issues to clarify the direction that parallel supercomputing should take. Craig A. Lee The Aerospace Corporation 2350 East El Segundo Blvd. El Segundo, CA 90245 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Jean.Marc.Andreoli@xerox.fr (Jean Marc Andreoli) Subject: Re: Tuple space, What is it ???? Keywords: Tuple Sender: news@xerox.fr Nntp-Posting-Host: oisans.grenoble.rxrc.xerox.com Organization: Rank Xerox Research Centre References: <1993Nov5.133422.9205@hubcap.clemson.edu> Date: Mon, 8 Nov 1993 08:46:30 GMT Apparently-To: comp-parallel@uunet.uucp In article <1993Nov5.133422.9205@hubcap.clemson.edu>, kjcooper@gandalf.rutgers.edu (Kenneth Cooper) writes: |> |> Can anyone tell me what Tuple space is ??? |> Kenny Cooper |> Basically, a tuple space is a blackboard (in the sense of blackboard systems), where the information contained in the blackboard consists of "tuples".
A blackboard is a data repository shared by several agents, which can communicate by concurrently asserting, querying, and retracting data in the repository. A tuple is a flat record containing a number of arguments (arity), e.g. ["a string",18,9.2e-4] is a tuple of arity 3, amenable to an elementary form of pattern matching. The notion of tuple space has been heavily used in Linda and also in blackboard extensions of concurrent logic programming languages (e.g. Parlog, Shared Prolog). I think the expression "tuple space" was first introduced by the Linda people. -- Jean-Marc Andreoli Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Alain.Fagot@imag.fr (Alain Fagot) Newsgroups: comp.parallel Subject: Searching Parallel Debugging References Organization: Institut Imag, Grenoble, France Hello, I'm currently designing a debugging environment for the following programming model: the computation is done by threads on several UNIX processes (behaving like virtual processors), and the cooperation between threads is based on asynchronous RPC. The debugging environment will be based on (deterministic) execution replay of parallel programs. I have already designed and implemented a mechanism for recording and replaying our parallel programs. I need some pointers to previous work done on debugging of parallel programs using a similar programming model. Thanks for your help. Alain. -- Alain FAGOT |/ Alain.Fagot@imag.fr \| Forget the feathers IMAG-LMC |\ member of the LGI /| forget the moccasins 46 Avenue Felix Viallet |/ Projet APACHE \| no need for a costume 38031 GRENOBLE CEDEX 1 |\ office: +33 76 57 48 94 /| to become an Indian. FRANCE |/ home: +33 76 85 41 09 \| Pow Wow Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kutscher@informatik.uni-stuttgart.de (Peter Kutschera) Subject: Sequoia Sender: news@informatik.uni-stuttgart.de Organization: IPVR, University of Stuttgart, Germany Date: Mon, 8 Nov 1993 12:42:24 GMT Apparently-To: hypercube@hubcap.clemson.edu I'm posting this for a friend of mine, who has no access to the Internet. Have you ever heard of a parallel computer based on Motorola hardware named Sequoia? Or is it a hardware vendor's name? If anybody has a clue, any info is appreciated. Thank you in advance, Peter -- -------------------------------------------------------------------------------- Peter Kutschera IPVR, University of Stuttgart Breitwiesenstr. 20 - 22 D-70565 Stuttgart (Germany) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: janr@fwi.uva.nl (Jan de Ronde) Subject: IPS Organization: FWI, University of Amsterdam Is there anyone out there who can tell me more about IPS? That is: is it available, details on usage of IPS etc...? Thanks. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: cel@theory.lcs.mit.edu (Charles E. Leiserson) Newsgroups: comp.parallel,comp.theory,comp.arch Subject: SPAA'94 CALL FOR PAPERS Organization: MIT Lab for Computer Science SPAA'94 CALL FOR PAPERS Sixth Annual ACM Symposium on PARALLEL ALGORITHMS AND ARCHITECTURES JUNE 27-29, 1994 Cape May, New Jersey The Sixth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA'94) will be held in Cape May, New Jersey, on June 27-29, 1994.
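To pull the two tuple-space replies in this thread together, here is a minimal C-Linda-style fragment of the Writer/Reader exchange described earlier; it assumes a C-Linda compiler (out/in/rd and the ?formal notation are Linda operations, not plain ANSI C), the function names are purely illustrative, and the handling of string fields varies somewhat between Linda implementations.

  /* Writer drops two tuples into tuple space. */
  void writer(void)
  {
      out("Athens", "OH");
      out("La Jolla", "CA");
  }

  /* Reader asks for any (string, string) tuple; the ?formals are filled
     in by matching on type and arity, so either tuple above may be the
     one delivered. */
  void reader(void)
  {
      char *city, *state;

      in(?city, ?state);      /* destructive read: the tuple leaves TS    */
      rd(?city, ?state);      /* non-destructive read: the tuple stays in TS */
  }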
It is sponsored by the ACM Special Interest Groups for Automata and Computability Theory (SIGACT) and Computer Architecture (SIGARCH) and organized in cooperation with the European Association for Theoretical Computer Science (EATCS). CONTRIBUTED PAPERS: Contributed papers are sought that present original, fundamental advances in parallel algorithms and architectures, whether analytical or experimental, theoretical or practical. A major goal of SPAA is to foster communication and cooperation among the diverse communities involved in parallel algorithms and architectures, including those involved in operating systems, languages, and applications. The Symposium especially encourages contributed papers that offer novel architectural mechanisms or conceptual advances in parallel architectures, algorithmic work that exploits or embodies architectural features of parallel machines, and software or applications that emphasize architectural or algorithmic ideas. VENDOR PRESENTATIONS: As in previous years, the Symposium will devote a subset of the presentations to technical material describing commercially available systems. Papers are solicited describing concepts, implementations or performance of commercially available parallel computers, routers, or software packages containing novel algorithms. Papers should not be sales literature, but rather research-quality descriptions of production or prototype systems. Papers that address the interaction between architecture and algorithms are especially encouraged. SUBMISSIONS: Authors are invited to send draft papers to: Charles E. Leiserson, SPAA'94 Program Chair MIT Laboratory for Computer Science 545 Technology Square Cambridge, MA 02139 USA The deadline for submissions is JANUARY 21, 1994. Simultaneous submission of the same research to SPAA and to another conference with proceedings is not allowed. Inquiries should be addressed to Ms. Cheryl Patton (phone: 617-253-2322; fax: 617-253-0415; e-mail: cap@mit.edu). FORMAT FOR SUBMISSIONS: Authors should submit 15 double-sided copies of a draft paper. The cover page should include (1) title, (2) authors and affiliation, (3) e-mail address of the contact author, and (4) a brief abstract describing the work. If the paper is to be considered as a vendor presentation, the words ``Vendor Presentation'' should appear at the top of the cover page. A technical exposition should follow on subsequent pages, and should include a comparison with previous work. The technical exposition should be directed toward a specialist, but it should include an introduction understandable to a nonspecialist that describes the problem studied and the results achieved, focusing on the important ideas and their significance. The draft paper--excluding cover page, figures, and references--should not exceed 10 printed pages in 11-point type or larger. More details may be supplied in a clearly marked appendix which may be read at the discretion of the Program Committee. Any paper deviating significantly from these guidelines--or which is not received by the January 21, 1994 deadline--risks rejection without consideration of its merits. ACCEPTANCE: Authors will be notified of acceptance or rejection by a letter mailed by March 15, 1994. A final copy of each accepted paper, prepared according to ACM guidelines, must be received by the Program Chair by April 8, 1994. It is expected that every accepted paper will be presented at the Symposium, which features no parallel sessions. CONFERENCE CHAIR: Lawrence Snyder, U. Washington. 
LOCAL ARRANGEMENTS CHAIR: Satish Rao and Yu-dauh Lyuu, NEC Research Institute. PROGRAM COMMITTEE: Gianfranco Bilardi (U. Padova, Italy), Tom Blank (MasPar), Guy Blelloch (Carnegie Mellon), David Culler (U. California, Berkeley), Robert Cypher (IBM, Almaden), Steve Frank (Kendall Square Research), Torben Hagerup (Max Planck Institute, Germany), Charles E. Leiserson, Chairman (MIT), Trevor N. Mudge (U. Michigan, Ann Arbor), Cynthia A. Phillips (Sandia National Laboratories), Steve Oberlin (Cray Research), C. Gregory Plaxton (U. Texas, Austin), Rob Schreiber (RIACS). -- Cheers, Charles Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sudy@PHOEBUS.SCA.COM (Sudy Bharadwaj) Subject: Parallel/Distributed Training Course. Reply-To: sudy@PHOEBUS.SCA.COM (Sudy Bharadwaj) Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158 Parallel/Distributed Computing Training Course Offered. SCIENTIFIC Computing Associates will offer the course "Introduction to Parallel Programming with Linda" on Thursday, December 9 at the Florida State University Supercomputer Computations Research Institute, in Tallahassee, FL. The course will run from 1:00pm to 7:00pm, and immediately follows the Cluster Workshop '93 at the same institution. The Cluster Workshop '93 ends 12/9 at 12:00pm, and the concepts discussed throughout the workshop can be put into practice in this course. The topics to be covered include: o Linda software architecture o Structuring parallel programs with Linda o Linda program development aids o Efficient Linda parallel programming The course centers around real-world program examples, used to introduce and illustrate the concepts it covers. In addition, the course also includes several hands-on programming problems. The course fee of $295 includes classroom and laboratory instruction and course materials. To register for this course or to request further information about this and upcoming courses, contact SCIENTIFIC Computing Associates, Inc. at: One Century Tower 265 Church Street New Haven, CT 06510 (203) 777-7442 (203) 776-4074 (fax) email: software@sca.com | | | | | Sudy Bharadwaj | One Century Tower | (203) 777-7442 | | Director of Sales | 265 Church Street | (203) 776-4074 (fax) | | Scientific Computing Assoc | New Haven, CT 06510 | email: sudy@sca.com | Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: zxu@ringer.cs.utsa.edu (Zhichen Xu) Subject: Seek survey of parallel computers Organization: Univ of Texas at San Antonio Anyone sending me a survey of existing parallel computer architectures would be greatly appreciated. Recommendations of materials on existing parallel architectures are welcome too. email zxu@ringer.cs.utsa.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: eugene@nas.nasa.gov (Eugene N. Miya) Subject: Re: The Future of Parallel Computing Organization: NAS - NASA Ames Research Center, Moffett Field, CA References: <1993Nov4.153024.25048@hubcap.clemson.edu> In article <1993Nov4.153024.25048@hubcap.clemson.edu> ohnielse@fysik.dth.dk writes: >I often compare the use of computers for scientific purposes to the use >of experimental equipment (vacuum chambers, electron guns, what have >you). Any good experimentalist knows his equipment well, and knows how >to push it to produce the results that he is seeking.
The same with >computers: A computational scientist has to know his computer >sufficiently well to make it produce results (nearly) as efficiently as >possible. >I believe there is no excuse for ignoring the hardware you use for >scientific computing. Let me play devil's advocate here. I believe you are setting up an easy straw man target and knocking him down. I have a friend who is a wind tunnel experimentalist. She has to check a lab notebook (paper) for her password (men check their lab books, too). The problem is that the equipment is not the science. If we had more readers from the comp.software-eng area, you would pull flak noting: 1) codes frequently outlive their machines (they have to be portable; only a few codes will survive the transition to parallel architectures (good riddance to that old science)). 2) if the science is ever commercialized, made into an engineering product, then portability has to include heterogeneity: few people develop codes for more than one machine. I started on IBMs, then went to Univacs, and other machines. I got into science for the power of its generality. This is now a wash because of computer architecture: programs don't work across many machines. Many scientists can't be bothered by machine details. Why should I have to know the smallest magnitude floating point number? (one recent debugging problem [algorithm can't initialize at zero].) It's silly. Great tension we have between the forces of change. I wonder who will win. I know who will lose. --eugene miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov Resident Cynic, Rock of Ages Home for Retired Hackers {uunet,mailrus,other gateways}!ames!eugene Second Favorite email message: 550 Host unknown (Authoritative answer from name server): Address family not supported by protocol family A Ref: Mathematics and Plausible Reasoning, vol. 1, G. Polya The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times. Premature optimization is the root of all evil (or at least most of it) in programming. %A D. E. Knuth %T Computer Programming as an Art %J CACM? %D 1974 %K Turing award lecture 1974 %X also ACM Turing Award Lectures: The First Twenty Years pp. 33-46. And DEK's own Literate Programming. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm,comp.sys.super,sci.geo.meteorology From: hart@spike.ucar.edu (Leslie Hart [FSL]) Subject: BOF at SC'93 on Parallelizing Numerical Weather Prediction Models Followup-To: hart@fsl.noaa.gov Keywords: numerical weather prediction, weather models, parallel programming Organization: NOAA/ERL/FSL, Boulder, CO We at NOAA/Forecast Systems Laboratory are parallelizing a next-generation weather prediction model. We would like to share our current experiences with others as well as learn from others. We are planning to hold a Birds of a Feather Session at Supercomputing '93 on Tuesday 16 November 1993 from 1:30 to 3:30 in room A-107. We would basically like to hold a get-to-know-each-other session. We invite 5-15 minute presentations on works in progress. We are looking more at the computer science aspect of parallelizing weather models but also want to include people from the meteorological end of the spectrum. We are interested in tools development, tools usage, acquisition of (near) realtime data, output techniques, and integrating into an overall forecasting environment.
If you wish more information or wish to give a (short) presentation, please send email to hart@fsl.noaa.gov or contact me at (303) 497 7253. Regards, Leslie Hart NOAA/ERL/FSL (Contractor for Science and Technology Corp.) Boulder, CO Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: matias@research.att.com Subject: DIMACS Workshop on Parallel Algorithms (Nov 17-19) PROGRAM and REGISTRATION INFORMATION DIMACS Workshop on Parallel Algorithms: From Solving Combinatorial Problems to Solving Grand Challenge Problems November 17-19, 1993 In the context of the 1993-94 DIMACS special year on Massively Parallel Computation, a three-day workshop on ``Parallel Algorithms: From Solving Combinatorial Problems to Solving Grand Challenge Problems'' will be held on November 17-19, 1993. The focus of the workshop will be the general area of parallel algorithms. The scope includes the study of basic problems in parallel computation, on the one hand, and the relevance of parallel computation to various applications, including the so-called Grand Challenge Problems, on the other hand. PARTICIPATION The workshop will be held at DIMACS at Rutgers University, Piscataway, New Jersey. DIMACS is the National Science Foundation science and technology center for discrete mathematics and computer science. It is a consortium of Rutgers and Princeton Universities, AT&T Bell Laboratories, and Bellcore. Co-organizers for this workshop are: Jim Flanagan (CAIP-Rutgers) [flanagan@caip.rutgers.edu] Yossi Matias (AT&T Bell Labs) [matias@research.att.com] Vijaya Ramachandran (U. Texas) [vlr@cs.utexas.edu] The workshop will include invited presentations and contributed talks. REGISTRATION The DIMACS Conference Center at Rutgers can accommodate about 100 participants. Subject to this capacity constraint, the workshop is open to all researchers. To register, contact Pat Toci [toci@dimacs.rutgers.edu, (908) 932-5930]. If possible, please register by NOVEMBER 10, although registration at the conference is permitted. THERE IS NO REGISTRATION FEE.
PROGRAM: Wednesday, November 17 ========================= 8:00 - 8:50 Light Breakfast ------------------------- 8:55 - 9:00 Welcoming remarks from DIMACS 9:00 - 9:30 Al Thaler (NSF) [tentative] HPCC Activities and Grand Challenges 9:30 - 10:00 Gary Miller (CMU) Numeric and Combinatorial Aspects to Parallel Scientific Computation ------------------------- 10:00 - 10:30 Coffee Break ------------------------- 10:30 - 11:00 Richard Cole (NYU) 2-D Pattern Matching 11:00 - 11:30 Uzi Vishkin (Maryland & Tel Aviv) Efficient Labeling of Substrings 11:30 - 12:00 Pierre Kelsen (U British Columbia) Constant Time Parallel Indexing of Points in a Triangle 12:00 - 12:20 Pangfeng Liu (DIMACS) Experiences with Parallel N-body simulations ------------------------- 12:30 - 2:00 LUNCH ------------------------- 2:00 - 2:30 Victor Pan (CUNY) Efficient Parallel computations in Linear Algebra with Applications 2:30 - 2:50 Roland Wunderling (Berlin) On the Impact of Communication Latencies on Distributed Sparse LU Factorization 2:50 - 3:20 Ian Parberry (North Texas U) Algorithms for Touring Knights ------------------------- 3:20 - 3:50 Coffee Break ------------------------- 3:50 - 4:20 Phil Klein (Brown) A Linear-Processor Polylog-Time Parallel Algorithm for Shortest Paths in Planar Graphs 4:20 - 4:50 Edith Cohen (Bell Labs) Undirected Shortest Paths in Polylog-Time and Near-Linear Work 4:50 - 5:20 Lin Chen (USC) Graph Isomorphism and Identification Matrices: Parallel Algorithms ------------------------- 5:30 Wine and Cheese Reception ------------------------- Thursday, November 18 ========================= 8:00 - 8:45 Light Breakfast ------------------------- 8:50 - 9:30 John Board (Duke) Algorithms for Multipole-Accelerated Force Calculation in Molecular Dynamics 9:30 - 10:00 Vijaya Ramachandran (U Texas at Austin) Parallel Graph Algorithms: Theory and Implementation ------------------------- 10:00-10:30 Coffee Break ------------------------- 10:30 - 11:00 Zvi Kedem (NYU) Towards High-Performance Fault-Tolerant Distributed Processing 11:00 - 11:30 Torben Hagerup (Max Planck Inst) Fast Deterministic Compaction and its Applications 11:30 - 12:00 Phil Gibbons (Bell Labs) Efficient Low Contention Parallel Algorithms 12:00 - 12:30 Paul Spirakis (Patras) Paradigms for Fast Parallel Approximations for Problems that are Hard to Parallelize ------------------------- 12:30 - 2:00 LUNCH ------------------------- 2:00 - 2:40 Olof Widlund (NYU) Some Recent Results on Schwarz Type Domain Decomposition Algorithms 2:40 - 3:10 Jan Prins (U North Carolina at CH) The Proteus System for the Development of Parallel Algorithms 3:10 - 3:40 Yuefan Deng (SUNY at Stony Brook) Parallel Computing Applied to DNA-protein Interaction Study: A Global Nonlinear Optimization Problem ------------------------- 3:40 - 4:10 Coffee Break ------------------------- 4:10 - 4:40 Rajeev Raman (Maryland) Optimal Parallel Algorithms for Searching a Totally Monotone Matrix 4:40 - 5:10 Teresa Przytycka (Odense) Trade-offs in Parallel Computation of Huffman Tree and Concave Least Weight Subsequence 5:10 - 5:40 Vijay Vazirani (IIT Delhi & DIMACS) A Primal-dual RNC Approximation Algorithm for (multi)-Set (multi)-Cover and Covering Integer Programs ------------------------- Friday, November 19 ========================= 8:00 - 8:45 Light Breakfast ------------------------- 8:50 - 9:20 Mike Goodrich (Johns Hopkins) Parallel Methods for Computational Geometry 9:20 - 9:50 Yossi Matias (Bell Labs) Highly Parallel Randomized Algorithms - Some Recent Results 9:50 - 
10:10 Dina Kravets (NJIT) All Nearest Smaller Values on Hypercube with Applications ------------------------- 10:10-10:40 Coffee Break ------------------------- 10:40 - 11:10 Mike Atallah (Purdue) Optimal Parallel Hypercube Algorithms for Polygon Problems 11:10 - 11:40 Ernst Mayr (Munich) Optimal Tree Contraction on the Hypercube and Related Network 11:40 - 12:00 David Haglin (Mankato State U) Evaluating Parallel Approximation Algorithms: With a Case Study in Graph Matching 12:00 - 12:20 Jesper Traff (Copenhagen) A Distributed Implementation of an Algorithm for the Maximum Flow Problem ------------------------- 12:20 - 1:50 LUNCH ------------------------- 1:50 - 2:20 Joseph JaJa (Maryland) Efficient Parallel Algorithms for Image Processing 2:20 - 2:50 Rainer Feldmann (Paderborn) Game Tree Search on Massively Parallel Systems 2:50 - 3:10 Stefan Tschoke (Paderborn) Efficient Parallelization of a Branch & Bound Algorithm for the Symmetric Traveling Salesman Problem 3:10 - 3:30 Erik Tarnvik (Umea, Sweden) Solving the 0-1 Knapsack Problem on a Distributed Memory Multicomputer ------------------------- 3:30 - 4:00 Coffee Break ------------------------- 4:00 - 4:30 Aravind Srinivasan (Institute for Advanced Study and DIMACS) Improved parallel algorithms via Approximating Probability Distributions 4:30 - 4:50 Per Laursen (Copenhagen) Parallel Simulated Annealing Using Selection and Migration -- an Approach Inspired by Genetic Algorithms 4:50 - 5:20 Zvi Galil (Columbia U & Tel Aviv) From the CRCW-PRAM to the HCUBE via the CREW-PRAM and the EREW-PRAM or In the Defense of the PRAM ------------------------- TRAVEL AND HOTEL INFORMATION: It is recommended that participants arriving by plane fly into Newark Airport. Flying into Kennedy or La Guardia can add more than an hour to the travel time to DIMACS. DIMACS has successfully and quite pleasantly used the Comfort Inn and the Holiday Inn, both in South Plainfield - they are next to each other. The Comfort Inn gives DIMACS the rate of $40.00 and the Holiday Inn of $60.00 (includes a continental breakfast). The Comfort Inn's # is 908-561-4488. The Holiday Inn's # is 908-753-5500. They both provide free van service to and from DIMACS. If desired, hotel reservations can be made by Pat Toci [toci@dimacs.rutgers.edu, (908) 932-5930], the workshop coordinator. She will need to know the date of arrival and departure, which hotel is preferred, and a credit card and expiration number. To travel between Newark Airport and DIMACS/hotels, we recommend ICS Van Service, 1-800-225-4427. The rate is $19.00 per person. It must be booked in advance. From the New York airports, participants may take the Grayline Air (bus) Shuttle (1-800-451-0455) to Newark and then ICS Van service from there. Participants arriving to DIMACS by car will need a parking permit. Parking permits can be obtained in advance by sending email to Pat Toci. Otherwise, they can be obtained any day of the workshop. All workshop events will take place at DIMACS, located in the CoRE Building of Rutgers University, Busch Campus, in Piscataway, New Jersey. For further questions regarding local transportation and accommodations, or to obtain detailed driving directions to the hotels and to DIMACS, contact Pat Toci [toci@dimacs.rutgers.edu, (908) 932-5930]. 
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: leichter@thorium.rutgers.edu Subject: Re: Linda for ALPHA(AXP)/OPENvms References: <1993Nov8.202248.1189@hubcap.clemson.edu> In article <1993Nov8.202248.1189@hubcap.clemson.edu>, sabu@panix.com (Sabu Bhatia) writes: | Hi, | | | Does anyone know if "Linda" has been ported to ALPHA(AXP)/OPNvms. If it | has been an ftp site or vendor name/phone number shall be greatly | appreciated. LRW Systems has an implementation of Linda for VMS. Currently, it runs on VAXes, but an Alpha/VMS port would be quite straightforward, given customer interest. I'd rather not go into details from my Rutgers account. Send mail to me as leichter@lrw.com (or lrw@lrw.com if you forget the spelling - it'll reach me) for further information. -- Jerry Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dxn@ivy.WPI.EDU (Deepak Narain) Subject: Re: Searching Parallel Debugging References Organization: Department of Computer Science, Worcester Polytechnic Institute >From Alain.Fagot@imag.fr (Alain Fagot): > I'm currently desiging a degugging environment for the following >programming model : the computation is done by threads on several UNIX >processes (behaving like virtual processors), the cooperation between >threads is based on asynchronous RPC. > >The debugging environment will be based on (deterministic) execution replay >of parallel programs. I have already designed and implemented a mechanism >for recording and replaying our parallel programs. > >I need some pointers to previous work done for debugging of parallel >programs using a similar programming model. Try the bibliography in wilma.cs.brown.edu:/debug. It is very extensive, and up to date to about the beginning of the year. A question: What kind of replay do you provide for, is it for just key events? What model do you use for storing data about program execution? I hope that these questions are not TOO general in nature. If you could point me to some papers, that would be fine too. ------------------------------------------------------------------------ Deepak Narain dxn@cs.wpi.edu Department of Computer Science Worcester Polytechnic Institute Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tim@osiris.usi.utah.edu (Timothy Burns) Subject: C++ classes for distributed parallel computing Sender: news@csc-sun.math.utah.edu Organization: Utah Supercomputing Institute, University of Utah Hello, I am looking for C++ classes and functions for distributed parallel computing. My primary interest is a package that will attach to a variety of message-passing platforms. Thanks, Tim -- Tim Burns email: tim@osiris.usi.utah.edu USI, 85 SSB, Univ. of Utah, UT 84112 phone: (801)581-5172 +--------------------------------------------------------------------------+ | Even the most brilliant scientific discoveries will in time change and | | perhaps grow obsolete, as new scientific manifestations emerge. But Art | | is eternal; for it reveals the inner landscape which is the soul of man. 
| +---------------------------------- --Martha Graham ---------+ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edsr!jlb@uunet.UU.NET (Jeff Buchmiller) Subject: Re: The Future of Parallel Computing Reply-To: edsr!jlb@uunet.UU.NET Organization: Electronic Data Systems In <1993Nov4.163540.2008@hubcap.clemson.edu> elm@cs.berkeley.edu (ethan miller) writes: > The goal of compiling for parallel code should NOT >necessarily be "the best possible code;" it should be "reasonably >close to the best possible code." PLEASE don't forget about portability. If the same code can be compiled onto multiple architectures, it will make the programmer's job MUCH MUCH easier. (Tune to architecture as needed, instead of rewrite from scratch.) --jeff -- Jeff Buchmiller Electronic Data Systems R&D Dallas, TX jlb@edsr.eds.com ----------------------------------------------------------------------------- Disclaimer: This E-mail/article is not an official business record of EDS. Any opinions expressed do not necessarily represent those of EDS. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: bodhi@cc.gatech.edu () Subject: MP OS Survey available in anon. ftp site Organization: College of Computing, Georgia Tech Date: Tue, 9 Nov 1993 00:27:56 GMT Apparently-To: comp-parallel@gatech.edu Hi, Finally, I have been able to put a draft of the multiprocessor operating system survey (which I have been promising for a while) in the ftp site of our school. You can get a copy by anonymous ftp from ftp.cc.gatech.edu. The paper is in /pub/bodhi/ossurvey.ps.Z. Since there was so much mail asking for a copy of the paper, I just spent some time updating the survey. That was the main reason why I took one week instead of a couple of days (:-)). I am sending this draft of the paper to ACM Computing Surveys. I am sure that I have missed a lot of the important (and interesting) work. So, I would really like to hear your opinions on (1) the structure of the survey (2) technical contents (3) what more should I include (4) what should I get rid of. I will try to include your suggestions in the final draft (if it gets accepted). I will really appreciate any input/criticism to make it a better paper. I am planning to send the file by e-mail to all of you who have sent me mail. However, since the file is rather large (70+4 pages), some mailers might create problems .. So, please let me know if you haven't received the paper in the next 2 days (and you do *not* have ftp access). I will make some alternative arrangements. *NOTE*: If you haven't sent me mail yet, and you would like to ftp the paper, and if you want to get future updates on the survey, please e-mail me a note .. Here is the abstract of the paper once again: ----------------------------------------------------------------------------- Title: A Survey of Multiprocessor Operating System Kernels ABSTRACT: Multiprocessors have been accepted as vehicles for improved computing speeds, cost/performance, and enhanced reliability or availability. However, the added performance requirements of user programs and functional capabilities of parallel hardware introduce new challenges to operating system design and implementation. This paper reviews research and commercial developments in multiprocessor operating system kernels from the late 1970's to the early 1990's.
The paper first discusses some common operating system structuring techniques and examines the advantages and disadvantages of using each technique. It then identifies some of the major design goals and key issues in multiprocessor operating systems. Issues and solution approaches are illustrated by a review of a variety of research and commercial multiprocessor operating system kernels. ------------------------------------------------------------------------- *Real Time Survey*: A colleague of mine is currently working on the paper. He will post a note in this group as soon as it is ready. Cheers, ----------------------------------------------------------------- MUKHERJEE,BODHISATTWA Georgia Institute of Technology, Atlanta Georgia, 30332 uucp: ...!{allegra,amd,hplabs,seismo,ut-ngp}!gatech!prism!gt3165c ARPA: bodhi@cc.gatech.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: David Kanecki Subject: Emergency Management/Planning Newsletter ------------------------------------------------------------------------------ ------------------------------------------------------------------------------ E M E R G E N C Y M A N A G E M E N T / P L A N N I N G N E W S L E T T E R ------------------------------------------------------------------------------ Volume 1, Issue 3, November 8, 1993 Moderator: David H. Kanecki, A.C.S., Bio. Sci. ============================================================================== To obtain the newsletter or send information, please contact: kanecki@cs.uwp.edu or David H. Kanecki, A.C.S., Bio. Sci. P.O. Box 26944 Wauwatosa, WI 53226-0944 Contents of this Issue ----------------------- 1. Papers received for Simulation for Emergency Management Conference 2. New Products 3. Thoughts on Emergency Management - Goals ============================================================================== 1. PAPERS RECEIVED for Simulation for Emergency Management Conference, April 11-14, 1994, San Diego. By David H. Kanecki, A.C.S., Bio. Sci., Associate Vice President - Emergency Management Technical Activity To show the quality of papers being presented to the Simulation for Emergency Management Conference, I would like to give a synopsis of some of the papers. In one paper, two persons describe a system to aid in building operation. The system allows a person to simulate the air quality in a building when an emergency condition exists. Due to this simulation, the building can be modified to improve the air quality in emergencies. In another paper, 3 persons describe how a simulation can help improve the effectiveness of relief efforts. With this simulation, the authors have been able to identify the time, place, and conditions. Thus, the simulations can be used to make recommendations. In yet another paper, 1 person describes a method for making emergency management simulations more accurate. With this technique, the simulation can better model the actual environment. In these descriptions, there are 6 persons involved in three different areas of Simulation for Emergency Management. The first paper concentrated on a resource. The second paper mentioned concentrated on an area. Lastly, the third paper mentioned concentrated on a system. Therefore, simulation for Emergency Management is a big field and needs everyone who can help. If you would like to submit a paper or attend the April 11-14, 1994 conference in San Diego please write to: Society for Computer Simulation c/o Emergency Management, SMC '94 P.O.
Box 17900 San Diego, CA 92177 Papers can be submitted until December 20th. 2. NEW PRODUCTS By David H. Kanecki, A.C.S., Bio. Sci. Please note -- I am relaying information that I have received on a professional basis. I am not affiliated with the companies mentioned. Recently, I have received a flyer from Powersim(tm) about its dynamic modeling and simulation package that is available for 386 with MS-Windows. The program is designed to work in management, industry, ecology/environment, science, and education and training. To find out more about the program contact: Powersim AS, P.O. Box 42, N-4871 Fevik, Norway or e-mail: josgo@modeld.no Also, I have received from Elsevier publishing an announcement for a new journal. The journal is entitled "Computer Science" and covers 10 topical areas and 15 fields of endeavor. Finally, Integrated Systems sent me a flyer on the family of programs available through its Matrix X product. The three families are matrix math for design and analysis, systemBuild for modelling and simulation, and AutoCode/AC-100 for code generation and rapid prototyping. 3. THOUGHTS ON EMERGENCY MANAGEMENT - GOALS By David H. Kanecki, A.C.S., Bio. Sci. In emergency management, it is important to have a goal. A goal acts as a marker and guidepost for someone to navigate to. Thus, no matter what type of navigation is used, the goal is there for them to go to. Based on the Kanecki family writing, a goal was described as this: "Health, Vitality, Egotism, Isolation, Education, Experience, Expertise, Open Mindedness, Skill, Honest, Awareness, Quality, Integrity, Leadership. 'A person who can correlate key details of all into goals, objectives, and plans, utilizing available or new resources, using good system management understanding, logic, and decisions will succeed. Remember, goals also need the involvement of people working together.'" Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ukeller@sun1.pallas-gmbh.de (Udo Keller) Subject: PARMACS V6.0 available for Workstations Organization: PALLAS GmbH Bruehl, Germany, Nov 9, 1993: PALLAS announces product availability for the workstation version of PARMACS V6.0. PARMACS is a machine-independent message-passing interface for parallel computers and workstation networks that has become the European de-facto standard in the field of industrial parallel applications. PARMACS has been chosen not only in most industrial software developments, but also in Esprit projects, for the production codes in the RAPS consortium, and in major European benchmark activities. PARMACS V6.0 comes with a library interface for FORTRAN and C and is fully backwards compatible with PARMACS V5.1. PARMACS V6.0 supports load balancing and heterogeneous networks, and is available for DEC Alpha and MIPS based systems, HP 9000 model 700 series, IBM RS/6000 series, SGI Iris series, SUN SPARCstation series. Special subscription conditions both for upgrade and new users are obtainable until Dec 31, 1993. Contact PALLAS for details. -- ---------------------------------------------------------------------------- Udo Keller phone : +49-2232-1896-0 PALLAS GmbH fax : +49-2232-1896-29 Hermuelheimer Str.10 direct line: +49-2232-1896-15 D-50321 Bruehl email : ukeller@pallas-gmbh.de ---------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: "Stephanie A.
Stotland" Subject: Heterogeneous Networks Organization: University of Virginia I am looking for information on how to create a distributed computing environment. I would like to develop high speed networks, coupled with workstations and high performance computers. I am familiar with parallel processing on a high performance computer, but would like to extend this to heterogeneous networks. Is there existing software out there? Are there references that could get me started? Please respond by email. Thank you, Stephanie Stotland sas6r@virginia.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stgprao@st.unocal.COM (Richard Ottolini) Subject: Re: Tuple space, What is it ???? Organization: Unocal Corporation References: <1993Nov5.133422.9205@hubcap.clemson.edu> <1993Nov8.213735.10734@hubcap.clemson.edu> Tuples are also used in database terminology -- two or more fields associated together. There is a formal algebra of operations transforming tuples when normalizing or joining databases. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: adamg@compnews.co.uk (Adam Greenwood) Subject: The Future of Parallel Computing Organization: Computer Newspaper Services, Howden, UK. > We are now in an age when the high performance machines have >various data network topologies, i.e. meshes, torii, linear arrays, >vector processors, hypercubes, fat-trees, switching networks, etc.. >etc.. These parallel machines might all have sexy architectures, but >we are headed in the wrong direction if we don't take a step back and >look at the future of our work. We shouldn't have to rewrite our >algorithms from scratch each time our vendor sells us the latest >hardware with amazing benchmarks. Benchmarks should also be >attainable from STANDARD compiler options. We should NOT have to >streamline routines in assembly language, give data layout directives, >nor understand the complexities of the hardware and/or data network. > > Please let me know what you think, I think the time has come for software engineering to catch up with hardware engineering in the parallel computing world. Parallel computing is at an important point right now. It's been proven in the scientific world, but not the commercial world, and I don't think the commercial world puts up with the lack of support for its hardware that researchers do. After all, isn't that half the fun of research? Making this new box do some clever tricks? :). A few replies to this thread have taken the line that scientific computing is quite happy to have to 'streamline code', or in other words hack routines in assembly language. As a software engineer who moved into parallel computing, I have some fundamental problems with this, but I am prepared to let them pass and ask one question: Is scientific computing the only future for Parallel Processing? That's the subject of this thread, remember, the _future_ of Parallelism. Whether you need a few low-level hacks for your scientific research right now has little bearing on whether, in the future, other potential users of parallel systems will think the same. I think there's even a chance that the suitability of parallel processors to scientific and engineering applications is damaging the chances of the commercial world ever benefiting from the potential.
This is all IMHO, of course, and I might be over-reacting, but I do think more can be made of the potential of Parallel Systems if a few attitudes were changed a bit. This wasn't meant to be a flame against scientific use of parallel computing... without that, there wouldn't be any. :) Adam -- O=========================================O======================O | Adam Greenwood, ISPS, CNS | | | email adamg@compnews.co.uk | My opinions don't | | phone (0430) 432480 x27 | reflect anything, | O=========================================O and that includes | | "Good morning Mr Tyler, going... down?" | the opinions of | | - Aerosmith - | anyone else. | | 'Love in an Elevator' | | O=========================================O======================O Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: SC'93 BOF Organization: Cornell Theory Center Computer, Computational and Discipline Scientists NSF and NASA have begun a joint project to evaluate the effectiveness of Scalable Parallel Computing systems (SPC) under credible scientific workloads on a wide variety of SPC's. We want to stress that the goal is not benchmarking, but rather evaluation and understanding of the current and emerging SPCs that exist at both agencies, and to provide information to the large user community that both agencies support, as well as to the vendors of the SPC systems. The project name is the Joint NSF/NASA Initiative in Evaluation (called JNNIE and pronounced genie) and will be described at a Birds of a Feather session (BOF) at Supercomputing 93, Wednesday, November 17 at 2PM in room A107. The BOF will give an overview of JNNIE and provide early case studies and anecdotal information on machine usability that have been collected to date. We will also describe the planned phase II approach. The purpose of the session is to inform the community at large of the JNNIE enterprise and to elicit feedback, comments, and possible refinement of the overall plan. We expect the JNNIE initiative to influence the evolution of HPC system hardware and software, algorithms, and applications to better serve the needs of the computational science community. We invite you to attend the BOF to learn about the project and to give us the benefit of your insight and opinions of it. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ben@carie.mcs.mu.edu (Benjamin J. Black) Subject: Re: The Future of Parallel Computing Date: 9 Nov 1993 17:33:07 GMT Organization: Marquette University - Dept. Math, Statistics, & Comp. Sci. References: <1993Nov9.163338.9165@hubcap.clemson.edu> Nntp-Posting-Host: carie.mcs.mu.edu In article <1993Nov9.163338.9165@hubcap.clemson.edu> edsr!jlb@uunet.UU.NET (Jeff Buchmiller) writes: > In <1993Nov4.163540.2008@hubcap.clemson.edu> elm@cs.berkeley.edu (ethan miller) writes: > > > The goal of compiling for parallel code should NOT > >necessarily be "the best possible code;" it should be "reasonably > >close to the best possible code." > > PLEASE don't forget about portability. If the same code can be compiled > onto multiple architectures, it will make the programmer's job MUCH MUCH > easier. (Tune to architecture as needed, instead of rewrite from scratch.) Anybody think the paralation model is a good solution to the portability vs. efficiency problem?
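On the portability point several posters in this thread are making (compile the same source for different machines instead of rewriting it), a minimal sketch of one common approach is to hide the machine-specific message-passing call behind a single wrapper and pick the target library at compile time; send_ints() is a hypothetical name, and the PVM 3 and draft-MPI calls below are used only as familiar examples.

  /* Compile with -DUSE_MPI or -DUSE_PVM; application code only ever
     calls send_ints(), so porting means recompiling, not rewriting. */

  #ifdef USE_MPI
  #include <mpi.h>
  void send_ints(int dest, int tag, int *buf, int n)
  {
      MPI_Send(buf, n, MPI_INT, dest, tag, MPI_COMM_WORLD);
  }
  #endif

  #ifdef USE_PVM
  #include <pvm3.h>
  void send_ints(int dest_tid, int tag, int *buf, int n)
  {
      pvm_initsend(PvmDataDefault);   /* pack into a fresh send buffer */
      pvm_pkint(buf, n, 1);
      pvm_send(dest_tid, tag);        /* tag plays the role of msgtag  */
  }
  #endif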
Ben ben@carie.mcs.mu.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: elm@cs.berkeley.edu (ethan miller) Subject: Re: The Future of Parallel Computing Organization: Berkeley--Shaken, not Stirred References: <1993Nov9.163338.9165@hubcap.clemson.edu> Reply-To: elm@cs.berkeley.edu >>>>> "Jeff" == Jeff Buchmiller writes: elm> The goal of compiling for parallel code should NOT necessarily be elm> "the best possible code;" it should be "reasonably close to the elm> best possible code." Jeff> PLEASE don't forget about portability. If the same code can be Jeff> compiled onto multiple architectures, it will make the Jeff> programmer's job MUCH MUCH easier. (Tune to architecture as Jeff> needed, instead of rewrite from scratch.) I agree 100%. Look at how serial programs work -- under Unix and derivatives, many programs don't need to be modified to compile for different architectures. This is especially true of user applications (as opposed to OS-type software). My hope is that, eventually, people will be able to switch from MPP A to MPP B merely by recompiling all of their code (perhaps with a few changes of #include files or predefined constants). In order for this to happen, though, the community must realize that a loss of a few percent of performance in exchange for portability and ease of coding and maintenance is acceptable for MPPs. ethan -- +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+ ethan miller--cs grad student | "Why is it whatever we don't elm@cs.berkeley.edu | understand is called a 'thing'?" #include | -- "Bones" McCoy Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hbchen@cse.uta.edu (Hsing B Chen) Subject: Final CFP- 5th IEEE Sym. on Parallel and Distributed Processing Organization: Computer Science Engineering at the University of Texas at Arlington ===================================================================== Final Call for participation IEEE 5th SPDP, 1993 Dallas, Texas ===================================================================== FIFTH IEEE SYMPOSIUM ON PARALLEL AND DISTRIBUTED PROCESSING Sponsors: IEEE-Computer Society and IEEE-CS-Dallas Chapter Omni Mandalay Hotel, Irving, Texas - December 1-4, 1993 This symposium provides a forum for the presentation and exchange of current work on a wide variety of topics in parallel and distributed processing including: Computer Architecture Neural Networks Artificial Intelligence Simulation and Modeling Programming Languages Interconnection Networks Parallel Algorithms Distributed Computing Operating Systems Scheduling VLSI Systems Design Parallel Applications Database and Knowledge-base Systems The technical program will be held on December 1-3, 1993 and the tutorials will be held on December 4, 1993. Tutorials: Full-day Tutorials (December 4: 9:00 am - 5:30 pm): T1: Roles of Optics in Parallel Computing and High-Speed Communications, Ahmed Louri, Univ. of Arizona. T2: Functional Programming, Patrick Miller and John Feo, Lawrence Livermore National Laboratory. Half-day Tutorials (December 4): T3: (9:00 am - 12:30 pm): Instruction Scheduling, Barbara Simons and Vivek Sarkar, IBM Corp. T4: (2:00 pm - 5:30 pm): Software Systems and Tools for Distributed Programming, Anand Tripathi, University of Minnesota. 
Hotel Reservations: Please place your reservations directly with Omni Mandalay Hotel at Las Colinas, 221 East Las Colinas Blvd., Irving, Texas 75039, Tel: (214) 556-0800, or (800) 843-6664. You must mention that you are attending SPDP in order to receive the special symposium rate of $94/night for a single or a double room. Please check with the reservations desk for the applicability of other special rates, such as those available to AAA members. Reservations should be made before November 16, 1993. After this date, reservations are subject to space availability.

Directions: Omni Mandalay Hotel, the conference site, is located in the Las Colinas development area in the city of Irving (a suburb of Dallas). The hotel is about 10 minutes from the Dallas/Fort Worth (DFW) International Airport.

By Car: Take the DFW Int'l Airport north exit. Take Highway 114 East towards Dallas, go approximately 8 miles to the O'Connor Road exit. Turn left, go two blocks, turn right on Las Colinas Blvd. The hotel will be 200 yards on the left.

Shuttle Service: Super Shuttle provides van service from DFW Int'l Airport to the Omni Mandalay for $8.50 per person each way. For more information and reservations, call 817-329-2002.

Weather: Dallas weather in early December ranges from low 40's to high 60's Fahrenheit.

[ASCII map of the hotel area omitted. Legend - LC: City Las Colinas; *: Omni Mandalay Hotel (SPDP location); B: Dallas Cowboys Football Stadium (Texas Stadium); O: O'Connor Rd.]
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
PLEASE SEND REGISTRATION FORM AND PAYMENT (payable to SPDP) TO: Dr. Behrooz Shirazi, University of Texas at Arlington, Dept. of Computer Science & Engineering, 416 Yates, Room 300, Arlington, TX 76019, Tel: (817) 273-3605, Fax: (817) 273-3784, E-mail: shirazi@cse.uta.edu. The symposium registration includes the symposium proceedings, banquet, and luncheon. Student registration does not include the symposium proceedings or the luncheon. (Advance Registration: Before 11/19/93.)
IEEE Members Non-Members Students Advance Registration: Symposium: US$260 US$325 US$100 Full-day Tutorial: US$220 US$275 US$220 Half-day Tutorial: US$110 US$140 US$110 On-site Registration Symposium: US$310 US$385 US$120 Full-day Tutorial: US$265 US$330 US$265 Half-day Tutorial: US$135 US$165 US$135 IEEE Member:____ Non-Member:____ Student:____ IEEE/Student No.:_______________________ Symposium: $_______ Tutorial: $_______, Specify choice of tutorial(s): T____ Total: $_______ _____Check enclosed, USA BANKS ONLY (payable to SPDP) ____Credit Card (VISA or Master Card ONLY) VISA___ or Master Card___ Credit Card No.:____________________________________ Expiration Date:____________ Signature:_____________________________ Last Name:_____________________________ First Name:________________________ Middle Initial:______ Organization: __________________________________________ Address: _____________________________________________ _______________________________________________ City, State,Zip/Country:__________________________________ Phone: ___________________, Fax: ____________________ E-mail:_______________________________ ======================================================================= Technical Program Fifth IEEE Symposium on Parallel and Distributed Processing Sponsors: IEEE-Computer Society and IEEE-CS-Dallas Chapter Omni Mandalay Hotel, Irving, Texas December 1-4, 1993 Note: L: 30-minute Presentation S: 15-minute Presentation Wednesday, December 1, 1993 8:30-9:30 am: On-site Registration - Conference material available at the Registration Desk 9:30-10:00 am: Opening and Awards Session (Mandalay West) 10:00-10:30 am: Break 10:30-12:00 noon Session A-1: Applications & Experimental Results I (Mandalay West) Chair: Simin Pakzad (Penn State University) L: Total Exchange on a Reconfigurable Parallel Architecture - by Yuh-Dauh Lyuu, Eugen Schenfeld L: Experimental Evaluation of Performance and Scalability of a Multiprogrammed Shared-Memory Multiprocessor - by Chitra Natarajan, Ravi Iyer S: Parallel Bidirectional A* Search on a Symmetry Multiprocessor - by Andrew Sohn S: Characterizing Execution Behavior of Application Programs on Network-based Shared-memory Mutiprocessors - by Xiaodong Zhang, Keqiang He, Elisa W. Chan Session B-1: Architecture I (Martaban) Chair: S. Lakshmivarahan (Univ. of Oklahoma) L: The Meerkat Multicomputer - by Robert Bedichek, Curtis Brown L: Correctness of a Directory-Based Cache Coherence Protocol: Early Experience - by Fong Pong, Michel Dubois S: Cache Design for an Explicit Token Store Data Flow Architecture - by P. Shanmugam, Shirish Andhare, Krishna Kavi, Behrooz Shirazi S: Architectural Support for Block Transfers in a Shared Memory Multiprocessor - by Steven J. E. Wilton, Zvonko G. Vranesic Session C-1: Wormhole Routing (Rangoon) Chair: Robert Cypher (IBM Almaden) L: Universal Wormhole Routing - by R. Greenberg and H. C. Oh L: A New Theory of Deadlock-free Adaptive Multicast Routing in Wormhole Networks - by J. Duato L: Adaptive Wormhole Routing in Hypercube Multicomputers - by X. Lin, A-H. Esfahanian, P.K. McKinley, A. Burago 12:00 - 1:30 pm: LUNCH 1:30 - 3:00 pm Session A-2: Storage Management (Mandalay West) Chair: Margaret Eich (SMU) L: Parallel Dynamic Storage Allocation Algorithms - by Arun Iyengar L: Storage Schemes for Parallel Memory Systems: An Approach Based on Circulant Matrices - by Cengiz Erbas, Murat M. Tanik, V. S. S. 
Nair S: Parallel Garbage Collection and Graph Reducer - by Wen-Yan Kuo, Sy-Yen Kuo Session B-2: Multithreading (Martaban) Chair: Krishna Kavi (NSF) L: An Evaluation of Software Multithreading in a Conventional Distributed Memory Multiprocessor - by Matthew Haines, Wim Bohm L: Analysis of Multithreaded Multiprocessors with Distributed Shared Memory - by Shashank S. Nemawarkar, R. Govindarajan, Guang R. Gao, Vinod K. Agarwal S: RICA: Reduced Interprocessor Communication Architecture - by Shuichi Sakai, Y. Kodama, M. Sato, A. Shaw, et al. Session C-2: Applications I (Rangoon) Chair: Phil Gibbons (ATT Bell Laboratories) L: Solving Markov Chains Using Bounded Aggregation on a Massively Parallel Processor - by R.B. Mattingly L: Direct and Iterative Parallel Methods for Boundary Value Problems - by I. Gladwell, G.Kraut L: Efficient Parallel Sibling Finding for Quadtree Data Structure - by D. Doctor and I. Sudborough 3:00 - 3:30 pm: BREAK 3:30 - 5:00 pm Session A-3: Interconnection Networks/ Routing I (Mandalay West) Chair: Nian-Feng Tzeng (USL) L: Analysis of Interconnection Networks Based on Simple Cayley Coset Graphs - by Jen-Peng Huang, S. Lakshmivarahan, S. K. Dhall L: An Efficient Routing Scheme for Scalable Hierarchical Networks - by Hyunmin Park, Dharma P. Agrawal S: Performance Evaluation of Idealized Adaptive Routing on k-ary n-cubes - by A. Lagman, W. A. Najjar, S. Sur, P. Srimani S: The B&E Model for Adoptable Wormhole Routing - by Xiaowei Shen, Y. S. Cheung Session B-3: Performance Evaluation I (Martaban) Chair: Diane Cook (UT-Arlington) L: Comparative Performance Analysis and Evaluation of Hot Spots on MIN-Based and HR-Based Shared-Memory Architectures - by Xiaodong Zhang, Yong Yan, Robert Castaneda L: Application of Parallel Disks for Efficient Handling of Object- Oriented Databases - by Y. C. Chehadeh, A. R. Hurson, L. L. Miller, B. N. Jamoussi S: The Parallel State Processor Model - by I. Gottlieb, L. Biran Session C-3: Geometric Algorithms (Rangoon) Chair: Cynthia Phillips (Sandia Nat'l Lab) L: Parallel Algorithms for Geometric Problems on Networks of Processors - by J. Tsay L: Optimal Parallel Hypercube Algorithms for Polygon Problems - by M. Atallah, D. Chen L: A Parallel Euclidean Distance Transformation Algorithm - by H. Embrechts, D. Roose 5:00 - 6:00 pm: BREAK 6:00 - 9:00 pm: CONFERENCE RECEPTION (Hors d'oeuvres and Cash Bar) Thursday, December 2, 1993 8:30 - 10:00 am Session A-4: Distributed Systems I (Mandalay West) Chair: Ray Liuzzi (Air Force Rome Labs) L: An Efficient and Reliable Multicast Algorithm - by Rosario Aiello, Elena Pagani, Gian Paolo Rossi L: An Efficient Load Balancing Algorithm in Distributed Computing Systems - by Jea-Cheoul Ryou, Jie-Yong Juang S: Assertions about Past and Future: Communication in a High Performance Distributed System Highways- by Mohan Ahuja S: Protocol Refinement for Maintaining Replicated Data in Distributed Systems - by D. Shou, Sheng-De Wang Session B-4: Performance Evaluation II (Martaban) Chair: Hee Yong Youn (UT-Arlington) L: A Methodology for the Performance Prediction of Massively Parallel Applications - by Daniel Menasce, Sam H. Noh, Satish K. Tripathi L: Determining External Contention Delay Due to Job Interactions in a 2-D Mesh Wormhole Routed Multicomputer - by Dugki Min, Matt W. Mutka L: Simulated Behaviour of Large Scale SCI Rings and Tori - by H. Cha, R. Daniel Jr. 
Session C-4: Mesh Computations (Rangoon) Chair: Abhiram Ranade (UC- Berkeley) L: Becoming a Better Host Through Origami: a Mesh is More Than Rows and Columns - by D. Greenberg, J. Park, E. Schwabe L: Deterministic Permutation Routing on Meshes - by B. Chlebus, M. Kaufmann, J. Sibeyn S: Dilation-5 Embedding of 3-Dimensional Grids into Hypercubes - by M. Chan, F. Chin, C. N. Chu, W. K. Mak 10:00 - 10:30 am: BREAK 10:30 - 12:00 noon Session A-5: Applications and Experimental Results II (Mandalay West) Chair: Doug Matzke (Texas Instruments) L: Scalable Duplicate Pruning Strategies for Parallel A* Graph Search - by Nihar R. Mahapatra, Shantanu Dutt L: A Parallel Implementation of a Hidden Markov Model with Duration Modeling for Speech Recognition - by C.D. Mitchell, R.A. Helzerman, L.H. Jamieson, M.P. Harper S: Performance Comparison of the CM-5 and Intel Touchstone Delta for Data Parallel Operations - by Zeki Bozkus, Sanjay Ranka, Geoffrey Fox, Alok Choudhary Session B-5: Interconnection Networks/Routing II (Martaban) Chair: Laxmi Bhuyan (Texas A&M Univ.) L: Analysis of Link Traffic in Incomplete Hypercubes - by Nian- Feng Tzeng, Harish Kumar L: Multicast Bitonic Network - by Majed Z. Al-Hajery, Kenneth E. Batcher L: Valved Routing: Implementing Traffic Control in Misrouting on Interconnection Network - by Wei-Kuo Liao, Chung-Ta King Session C-5: Message-Passing Systems (Rangoon) Chair: Sandeep Bhatt (Bellcore) L: Computing Global Combine Operations in the Multi-Port Postal Model - by A. Bar-Noy, J. Bruck, C.T. Ho, S. Kipnis, B. Schieber S: Broadcasting Multiple Messages in Simultaneous Send/Receive Systems - by A. Bar-Noy, S. Kipnis S: Fault Tolerant Broadcasting in SIMD Hypercubes - by Y. Chang S: Notes on Maekawa's O(sqrt N) Distributed Mutual Exclusion Algorithm - by Ye-In Chang 12:00 - 2:00 pm: CONFERENCE LUNCHEON and KEYNOTE SPEECH (Salon D) Stephen L. Squires (Advanced Research Projects Agency) High Performance Computing and National Scale Information Enterprises 2:00 - 3:30 pm Session A-6: Partitioning and Mapping I (Mandalay West) Chair: Jeff Marquis (E-Systems) L: Partitioning and Mapping a Class of Parallel Multiprocessor Simulation Models - by H. Sellami, S. Yalamanchili L: An Efficient Mapping of Feed-Forward with Back Propagation ANNs on Hypercubes - by Q. M. Malluhi, M. A. Bayoumi, T. R. N. Rao S: Data Partitioning for Networked Parallel Processing - by Phyllis E. Crandall, Michael J. Quinn Session B-6: Architecture II (Martaban) Chair: Dharma P. Agrawal (North Carolina State University) L: Designing a Coprocessor for Recurrent Computations - by K. Ganapathy, B. Wah L: Analysis of Control Parallelism in SIMD Instruction Streams - by J. Allen, V. Garg, D. E. Schimmel L: Representation of Coherency Classes for Parallel Systems - by J. A. Keane, W. Hussak Session C-6: Applications II (Rangoon) Chair: Hal Sudborough (UT-Dallas) L: A Parallel Lattice Basis Reduction for Mesh-Connected Processor Arrays and Parallel Complexity - by Ch. Heckler, L. Thiele L: Parallel Network Dual Simplex Method on a Shared Memory Multiprocessor - by K. Thulasiraman, R.P. Chalasani, M.A. 
Comeau S: Parallel Simulated Annealing by Generalized Speculative Computation - by Andrew Sohn, Zhihong Wu, Xue Jin 3:30 - 4:00 pm: BREAK 4:00 - 5:30 pm Session A-7: Languages I (Mandalay West) Chair: Benjamin Wah (University of Illinois at Urbana-Champaign) L: On the Granularity of Events when Modeling Program Executions - by Eric Leu, Andre Schiper L: Cloning ADT Modules to Increase Parallelism: Rationale and Techniques - by Lonnie R. Welch L: The Design and Implementation of Late Binding in a Distributed Programming Language - by Wenwey Hseush, Gail E. Kaiser Session B-7: Reliability and Fault-Tolerance I (Martaban) Chair: A. Waksman (Air Force) L: Measures of Importance and Symmetry in Distributed Systems - by Mitchell L. Neilsen S: Dependability Analysis for Large Systems: A Hierarchical Modeling Approach - by Teresa A. Dahlberg, Dharma P. Agrawal L: An Adaptive System-Level Diagnosis Approach for Hypercube Multiprocessors - by C. Feng, L. N. Bhuyan, F. Lombardi Session C-7: Distributed Algorithms (Rangoon) Chair: Ioannis Tollis (UT-Dallas) L: How to Share a Bounded Object: A Fast Timing-Based Solution - by R. Alur, G. Taubenfeld L: Using Induction to Prove Properties of Distributed Programs - by V. Garg and A. Tomlinson S: An Optimal Distributed Ear Decomposition Algorithm with Applications to Biconnectivity and Outer Planarity Testing - by A. Kazmierczak, S. Radhakrishnan S: Group Membership in a Synchronous Distributed System - by G. Alari, A. Ciuffoletti Friday, December 3, 1993 8:30 - 10:00 am Session A-8: Compilation (Mandalay West) Chair: Paraskevas Evripidou (SMU) L: Compiling Distributed C++ - by Harold Carr, Robert Kessler, Mark Swanson L: ALIAS Environment: A Compiler for Application Specific Arrays - by James J. Liu, Milos D. Ercegovac L: An Algorithm to Automate Non-Unimodular Transformations of Loop Nests - by Jingling Xue Session B-8: Languages II (Martaban) Chair: Les Miller (Iowa State University) L: Genie: An Environment for Partitioning Mapping in Embedded Multiprocessors - by S. Yalamanchili, L. Te Winkel, D. Perschbacher, B. Shenoy L: Analysis of Affine Communication Specifications - by S. Rajopadhye L: C-Linda Implementation of Distinct Element Model - by Siong K. Tang, Richard Zurawski Session C-8: Fault-Tolerant Communication (Rangoon) Chair: Yanjun Zhang (SMU) L: Multicasting in Injured Hypercubes Using Limited Global Information - by J. Wu, K. Yao L: Fault-Tolerance Properties of deBruijn and Shuffle-Exchange Networks - by M. Baumslag L: Communication Complexity of Fault-Tolerant Information Diffusion - by L. Gargano, A. Rescigno 10:00 - 10:30 am: BREAK 10:30 - 12:00 noon Session A-9: Interconnection Networks/Routing III (Mandalay West) Chair: Dhiraj K. Pradhan (Texas A&M) L: Exact Solutions to Diameter and Routing Problems in PEC Networks - by C. S. Raghavendra, M. A. Sridhar L: Folded Peterson Cube Networks: New Competitors for the Hyper Cube - by Sabine Oehring, Sajal K. Das S: A Unified Structure for Recursive Delta Networks - by P. Navaneethan, L. Jenkins S: Recursive Diagonal Torus: An Interconnection Network for Massively Parallel Computers - by Yulu Yang, H. Amano, H. Shibamura, T. Sueyoshi Session B-9: Potpourri (Martaban) Chair: Bill D. Carroll (UT-Arlington) L: A Processor Allocation Strategy Using Cube Coalescing in Hypercube Multicomputers - by Geunmo Kim, Hyusoo Yoon S: An Efficient Storage Protocol for Distributed Object Oriented Databases- by Min He, Les L. Miller, A. R. Hurson, D. 
Sheth S: Performance Effects of Synchronization in Parallel Processors - by Roger D. Chamberlain, Mark A. Franklin S: Compiling Distribution Directives in a FORTRAN 90D Compiler - by Z. Bozkus, A. Choudhary, G. Fox, T. Haupt, S. Ranka S: A Proposed Parallel Architecture for Exploring Potential Concurrence at Run-Time - by M. F. Chang, Y. K. Chan Session C-9: Parallel Algorithms (Rangoon) Chair: Farhad Shahrokhi (University of North Texas) L: Fast Rehashing in PRAM Emulations - by J. Keller L: On the Furthest-Distance-First Principle for Data Scattering with Set-Up Time - by Y-D. Lyuu L: Zero-One Sorting on the Mesh - by D. Krizanc and L. Narayanan 12:00 - 1:30 pm: LUNCH 1:30 - 3:00 pm Session A-10: Distributed Systems II (Mandalay West) Chair: Dan Moldovan (SMU) L: Incremental Garbage Collection for Causal Relationship Computation in Distributed Systems - by R. Medina S: STAR: A Fault-Tolerant System for Distributed Applications - by B. Folliot, P. Sens S: Flexible User-Definable Performance of Name Resolution Operation in Distributed File Systems - by Pradeep Kumar Sinha, Mamoru Maekawa S: A Layered Distributed Program Debugger - by Wanlei Zhou S: Distributed Algorithms on Edge Connectivity Problems - by Shi- Nine Yang, M.S. Cheng Session B-10: Partitioning and Mapping II (Martaban) Chair: Sajal Das (University of North Texas) L: Task Assignment on Distributed-Memory Systems with Adaptive Wormhole Routing - by V. Dixit-Radiya, D. Panda L: A Fast and Efficient Strategy for Submesh Allocation in Mesh- Connected Parallel Computers - by Debendra Das Sharma, Dhiraj K. Pradhan L: Scalable and Non-Intrusive Load Sharing in Distributed Heterogeneous Clusters - by Aaron J. Goldberg, Banu Ozden Session C-10: Network Communication (Rangoon) Chair: C.S. Raghavendra (WSU) L: Optimal Communication Algorithms on the Star Graph Interconnection Network - by S. Akl, P. Fragopoulou L: Embedding Between 2-D Meshes of the Same Size - by W. Liang, Q. Hu, X. Shen S: Optimal Information Dissemination in Star and Pancake Networks - by A. Ferreira, P. Berthome, S. Perennes 3:00 - 3:30 pm: BREAK 3:30 - 5:00 pm Session A-11: Applications and Experimental Results III (Mandalay West) Chair: Bertil Folliot (Universite Paris) L: Extended Distributed Genetic Algorithm for Channel Routing - by B. B. Prahalada Rao, R. C. Hansdah S: A Data-Parallel Approach to the Implementation of Weighted Medians Technique on Parallel/Super-computers - by K. P. Lam, Ed. Horne S: Matching Dissimilar Images: Model and Algorithm - by Zhang Tianxu, Lu Weixue S: Parallel Implementations of Exclusion Joins - by Chung-Dak Shum S: Point Visibility of a Simple Polygon on Reconfigurable Mesh - by Hong-Geun Kim and Yoo-Kun Cho Session B-11: Reliability and Fault-Tolerance II (Martaban) Chair: Ben Lee (Oregon State University) L: Adaptive Independent Checkpointing for Reducing Rollback Propagation - by Jian Xu, Robert H. B. Netzer L: Fast Polylog-Time Reconfiguration of Structurally Fault- Tolerant Multiprocessors - by Shantanu Dutt L: Real-Time Distributed Program Reliability Analysis - by Deng- Jyi Chen, Ming-Cheng Sheng, Maw Sheng Session C-11: Interconnection Networks/Routing IV (Rangoon) Chair: Sudha Yalamanchili (Georgia Tech.) L: Scalable Architectures with k-ary n-cube cluster-c organization - by Debashis Basak, Dhabaleswar Panda L: On Partially Dilated Multistage Interconnection Networks with Uniform Traffic and Nonuniform Traffic Spots - by M. Jurczyk, T. 
Schwederski S: Binary deBruijn Networks for Scalability and I/O Processing - by Barun K. Kar, Dhiraj K. Pradhan S: A Class of Hypercube-Like Networks - by Anirudha S. Vaidya, P. S. Nagendra Rao, S. Ravi Shankar Saturday, December 4, 1993 Tutorial T1: Roles of Optics in Parallel Computing and High-Speed Communications by Ahmed Louri - University of Arizona 8:30 am - 5:00 pm (Martaban) This tutorial will start by examining the state-of-the-art in parallel computing, including parallel processing paradigms, hardware, and software. We will then discuss the basic concepts of optics in computing and communications and the motivations for considering optics and ways in which optics might provide significant enhancements to the computing and communications technologies. The tutorial will include some case studies of optical computing and switching systems. Current research and future applications of optical computing are discussed. Tutorial T2: Functional Programming by Patrick Miller and John Feo - Lawrence Livermore National Laboratory 8:30 am - 5:00 pm (Rangoon) The objective of this tutorial is to familiarize the participants with the current state of functional languages. We will cover both theoretical and practical issues. We will explain the mathematical principals that form the foundation of functional languages, and from which they derive their advantages. We will survey a representative set of existing functional languages and different implementation strategies. We will use the functional language Sisal to expose the participants to the art of functional programming. Tutorial T3: Instruction Scheduling by Barbara Simons and Vivek Sarkar - IBM Corp. 8:30 - 12:00 noon (Nepal) In this tutorial we describe different models of deterministic scheduling, including pipeline scheduling, scheduling with inter- instructional latencies, scheduling VLIW machines, and assigned processor scheduling. In addition, we present an overview of important extensions to the basic block scheduling problem. The program dependence graph, annotated with weights, provides a good representation for global instruction scheduling beyond a basic block. Finally, we describe the close interaction between the problems of instruction scheduling and register allocation. Tutorial T4: Software Systems and Tools for Distributed Programming by Anand Tripathi - University of Minnesota 1:30-5:00 pm (Nepal) This tutorial will present an overview of the most commonly used paradigms and models for distributed computing. This discussion will address interprocess communication models and heterogeneous computing issues. An overview of the object model of computing will be presented in the context of micro-kernel architectures for distributed computing. The programming languages and tools to be discussed here include Parallel Virtual Machine (PVM), P4, Linda, Express, Mentat, Condor, CODE/ROPE, and Orca. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: usr@rice.edu (Usha Rajagopalan) Subject: Wanted: Some information on Interconnection Networks Reply-To: usr@rice.edu (Usha Rajagopalan) Organization: Rice University I am running some simulations and I need to use some "real" numbers for the interconnection network parameters. I need information such as 1. ROuting (vitual cutthough, wormhole etc.) 2. Switch clock speed 3. Switching Delay 4. Switch Buffer Size 5. Width of links 6. 
Port Buffer size for interconnection networks in machines such as Intel Hypercube, Intel Paragon, CM-5, J-Machine, BBN Butterfly. Information on any other commercial machine not mentioned above is also appreciated. But in that case, please also mention the network type. Thanks for any help. I will summarize if there is interest.

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: waander@cs.umd.edu (Bill Andersen) Subject: Re: AI and Parallel Machines Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742 References: <1993Nov5.153315.2624@unocal.com>

>> I am looking for references regarding the impact parallel processing has
>> had on projects involving AI.

Check out the 1993 AAAI Spring Symposium proceedings on Massive Parallelism and AI. Our research group (led by Jim Hendler) here at UMD is heavily involved in using MPP for AI. Most of our work has been done on TMC hardware. There are many other research groups working on MPP and AI. Notables (other than those doing neural networks and GAs) are:

Matt Evett, FAU (Florida Atlantic Univ.)
James Geller, NJIT (NJ Institute of Technology)
Marc Goodman, Brandeis
Hiroaki Kitano, NEC Corp. and CMU
Dan Moldovan, USC
Lokendra Shastri, UPenn (he may not be there now)
David Waltz, NEC in Princeton

I'm sure there are others but these are the folks who spring immediately to mind. My apologies for any omissions. You might want to check out two upcoming books. One is by Hendler and Kitano from AAAI/MIT Press ("Massively Parallel Artificial Intelligence") and the other by Laveen Kanal from Elsevier. Both should be out shortly. Hope this gives you a start. Please feel free to email me if you have more questions. ...bill
--
/ Bill Andersen (waander@cs.umd.edu) /
/ University of Maryland /
/ Department of Computer Science /
/ College Park, Maryland 20742 /

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: zxu@ringer.cs.utsa.edu (Zhichen Xu) Subject: Acknowledgement Organization: Univ of Texas at San Antonio

I'd like to thank the following friends, who have provided me with information on existing parallel computers; my regards to them are beyond words. Andy. Matthew D. Bennett B.J. Manuel Eduardo C. D. Correia hecht@oregon.cray.com Peter J.E. Stan Stephanie Stotland

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Murray Cole Subject: The Future of Parallel Computing Organization: Department of Computer Science, University of Edinburgh

There have been a number of posts in this thread which proceed along the lines that "parallel software needs to catch up with parallel hardware". I would like to suggest (for discussion at least!) that the reverse might be the case. We now have books full of snappy parallel algorithms, whether for abstract models such as PRAM or for specific networks. Similarly, it's not hard to devise notations in which these algorithms can be expressed and manipulated clearly and concisely. Unfortunately, most of them don't seem to run too well on real parallel machines, even when there is no mismatch in the architectural structure. Why should the blame lie with concise, efficient software which has illuminated the essential parallel structure of a problem? Doesn't the fault lie with hardware which can't satisfactorily implement the simplest, clearest parallel algorithms with reasonable efficiency?
For example, why shouldn't I expect my simple summation algorithm for n values on n hypercube processors to run a lot faster than a sequential summation on one of those processors? Perhaps I'm just using the wrong machines ... Murray.

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: laakkone@klaava.Helsinki.FI (Tero Laakkonen) Subject: Q: petri nets? Organization: University of Helsinki

hi, can anyone tell me about ftp'able docs about petri nets? i'm interested in finding out how they *work* (yes i know there's books but i don't have time to look for books). ADV - thanks - ANCE
--
"i abhor your pretentious insight. i respect conscious guessing because it consists of two good qualities: courage and modesty." -imre lakatos

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jonathan@cs.ualberta.ca (Jonathan Schaeffer) Subject: experiments in assessing the "usability" of pp models/tools Organization: University of Alberta, Edmonton, Canada

There are many different models for writing parallel programs, including new languages, modifications to existing languages, library calls, etc. These models may be realized in a variety of parallel programming tools. Do the readers of this newsgroup know whether anyone has done any experiments with these tools to assess their usability? How easy are they to learn? How quickly can users get solutions? How good are the solutions? What are the usability differences for novices versus experts? We are interested in experiments with human subjects. For example, a controlled experiment might have one group of programmers learn Linda and one group learn PVM. One could then measure how the subjects perform when given a variety of problems to solve. In the sequential world a number of studies like this have been performed. We know of few references for the parallel world. Do you know of any references? Thank you, Jonathan Schaeffer

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edsr!jlb@uunet.UU.NET (Jeff Buchmiller) Subject: Re: The Future of Parallel Computing Reply-To: edsr!jlb@uunet.UU.NET Organization: Electronic Data Systems

In <1993Nov4.163540.2008@hubcap.clemson.edu> elm@cs.berkeley.edu (ethan miller) writes:
> The goal of compiling for parallel code should NOT
> necessarily be "the best possible code;" it should be "reasonably
> close to the best possible code."

OK, cool. Sometimes optimization is worth it, sometimes not. But PLEASE don't forget about portability. If the same code can be compiled onto multiple architectures, it will make the programmer's job MUCH MUCH easier. (Tune to new architecture, instead of rewrite from scratch.) If someone never has to change architectures, s/he is lucky and/or too narrow-minded in his/her work. --jeff
--
Jeff Buchmiller Electronic Data Systems R&D Dallas, TX jlb@edsr.eds.com
-----------------------------------------------------------------------------
Disclaimer: This E-mail/article is not an official business record of EDS. Any opinions expressed do not necessarily represent those of EDS.
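To put a concrete face on the hypercube summation example raised by Murray Cole above: the textbook approach is recursive doubling across the cube dimensions. The sketch below assumes a hypothetical blocking exchange() routine (send my current value to the neighbour across one dimension and receive that neighbour's value back); it is not any particular vendor's interface.

/* Recursive-doubling sum of one value per node on a d-dimensional hypercube.
 * my_id is this node's number in 0 .. 2^d - 1.  exchange() is a hypothetical
 * blocking primitive: it ships `value` to node `partner` and returns the
 * value that node sent back. */
double cube_sum(double local, int my_id, int d,
                double (*exchange)(int partner, double value))
{
    double sum = local;
    int bit;
    for (bit = 0; bit < d; bit++) {
        int partner = my_id ^ (1 << bit);  /* neighbour across dimension `bit` */
        sum += exchange(partner, sum);     /* both ends now hold the same partial */
    }
    return sum;                            /* after d steps: the global total */
}

With n = 2^d values this takes d message exchanges plus d additions per node, against n - 1 additions for the serial loop, so whether it "runs a lot faster" comes down to how the per-message cost compares with an addition -- which is essentially the granularity argument David Bader makes later in this thread.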
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Douglas Hanley Subject: Distributed memory MIMD algorithms Organization: Department of Computer Science, University of Edinburgh Would somebody be kind enough to point me in the direction of information regarding the design and analysis of distributed memory MIMD algorithms, especially sorting and parallel prefix. It would be helpful if the analysis parameterised communication latency so that algorithmic performance could be estimated across MIMD architectures with varying interconnection networks. Thanks +-----------------------------------------------------------------------------+ | Douglas G Hanley, Dept of Computer Science, University of Edinburgh | | dgh@dcs.ed.ac.uk | +-----------------------------------------------------------------------------+ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: zaky@cs.nps.navy.mil (Amr Zaky) Subject: References Required About Task Graph Partitioning I will be very grateful to any pointers regarding previous or current work relevant to the following problem. Given 1) Undirected task graph, where every node represents a task and its weight denotes the task's processing time, and edge weights denote the amount of communication between nodes, and 2) A distributed memory multiprocessor represented as a graph whose nodes are homogeneous processors and whose edges are links between the processors, Find a partitioning of the task graph onto the multiprocessor so as to minimize some useful parameter (I am looking for that!!), without ignoring link delays and link contention. I am thinking of the following parameter for efficiency: Let Xi be the amount of time some resource (processor or link) is utilized. Find the partition that will minimize Y-Z, where Y=max(Xi), Z=min(Xi) i=1...number of resources. Any Pointers? mail to zaky@cs.nps.navy.mil Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Siegfried Grabner Subject: Want Dissertation by S. Hammond dear netters, i am looking for the following Ph.D. thesis by S. Hammond: S. Hammond, Mapping unstructured grid computations to massively parallel computers, Ph.D. thesis, Rensselear Polytechnic Institute, Dept. of Computer Science, Renesselear, Ny, 1992. this thesis is referenced in a tech. rep. of sandia labs. (tech. rep. SAND92-1460): "An Improved Spectral Graph Partitioning Algorithm for Mapping Parallel Computations". If anybody out there knows where to get this thesis from or if anybody knows the author or his email address, respectively, please let me know. It is impossible for me to get this thesis because when i asked our librarian she told me that in austria it is nearly impossible to get an answer from an american university (a bit strange, what a shame :( ). thanks for helping me siegi * Siegfried Grabner * * Dept. for Graphics and Parallel Processing (GUP-Linz) * * Johannes Kepler University Linz * * Altenbergerstr. 69, A-4040 Linz, AUSTRIA, EUROPE * Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: To the two volunteers who recently said they would join moderators Some how I've lost one of your mail address. Would the two of you who said, probably yesterday, you'd like to join the group, please send me a message. 
I have the following six addresses berryman-harry@CS.YALE.EDU, bigrigg@cs.pitt.edu, killian@epcc.ed.ac.uk, kumbra@point.cs.uwm.edu, richard.muise@acadiau.ca, seo@pacific.cse.psu.edu Steve Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: Moderator Replacement---Looks like we'll have plenty Thanks to all who have volunteered to moderate. We have seven folks who are willing to participate. Steve Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm,comp.theory,comp.org.ieee,info.theorynt From: das@ponder.csci.unt.edu (Sajal Das) Subject: Call for Papers Reply-To: comp-org-ieee@zeus.ieee.org Organization: University of North Texas, Denton ******************* * CALL FOR PAPERS * ******************* JOURNAL OF COMPUTER & SOFTWARE ENGINEERING -------------------------------------------- SPECIAL ISSUE on PARALLEL ALGORITHMS & ARCHITECTURES (Tentative Publication Date: January 1995) Due to fundamental physical limitations on processing speeds of sequential computers, the future-generation high performance computing environment will eventually rely entirely on exploiting the inherent parallelism in problems and implementing their solutions on realistic parallel machines. Just as the processing speeds of chips are approaching their physical limits, the need for faster computations is increasing at an even faster rate. For example, ten years ago there was virtually no general-purpose parallel computer available commercially. Now there are several machines, some of which have received wide acceptance due to reasonable cost and attractive performance. The purpose of this special issue is to focus on the desgin and analysis of efficient parallel algorithms and their performance on different parallel architectures. We expect to have a good blend of theory and practice. In addition to theoretical papers on parallel algorithms, case studies and experience reports on applications of these algorithms in real-life problems are especially welcome. Example topics include, but are not limited to, the following: Parallel Algorithms and Applications. Machine Models and Architectures. Communication, Synchronization and Scheduling. Mapping Algorithms on Architectures. Performance Evaluation of Multiprocessor Systems. Parallel Data Structures. Parallel Programming and Software Tools. *********************************************************************** Please submit SEVEN copies of your manuscript to either of the * Guest Editors by May 1, 1994: * * *********************************************************************** Professor Sajal K. Das || Professor Pradip K. Srimani * Department of Computer Science || Department of Computer Science * University of North Texas || Colorado State University * Denton, TX 76203 || Ft. Collins, CO 80523 * Tel: (817) 565-4256, -2799 (fax) || Tel: (303) 491-7097, -6639 (fax) * Email: das@cs.unt.edu || Email: srimani@CS.ColoState.Edu * *********************************************************************** INSTRUCTIONS FOR SUBMITTING PAPERS: Papers should be 20--30 double spaced pages including figures, tables and references. Papers should not have been previously published, nor currently submitted elsewhere for publication. Papers should include a title page containing title, authors' names and affiliations, postal and e-mail addresses, telephone numbers and Fax numbers. Papers should include a 300-word abstract. 
If you are willing to referee papers for this special issue, please send a note with research interest to either of the guest editors.
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cychong@magnus.acs.ohio-state.edu (Robert Chong) Subject: Need introductory parallel programming book Date: 11 Nov 1993 16:45:35 GMT Organization: The Ohio State University Nntp-Posting-Host: top.magnus.acs.ohio-state.edu

Hi, everyone, Can anyone suggest an introductory, language-independent book which can teach me to write parallel programs/algorithms? Thanks.
--
Robert Chong, Department of Mechanical Engineering, The Ohio State University
Email: cychong@magnus.acs.ohio-state.edu

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.arch,comp.parallel From: thompson@inmos.co.uk () Subject: INMOS Information available on T9000 Sender: news@inmos.co.uk Reply-To: thompson@aust.inmos.co.uk () Organization: INMOS Architecture Group Date: Thu, 11 Nov 1993 16:52:17 GMT Apparently-To: hypercube@hubcap.clemson.edu

Some information regarding the IMS T9000 transputer is now available on the INMOS FTP server, ftp.inmos.co.uk [192.26.234.3], in the directory /inmos/info/T9000. This relates mainly to the superscalar processor and the cache memory system at present. Further information regarding the INMOS communications architecture is available in the directory /inmos/info/comms, including a compressed PostScript copy of the book "Networks, Routers and Transputers", and some papers, in the subdirectories book/ and papers/ respectively.
--
Peter Thompson            INTERNET: thompson@inmos.co.uk      INMOS is a member
INMOS Ltd                 JANET: thompson@uk.co.inmos         of the SGS-Thomson
1000 Aztec West           UUCP: uknet!inmos!thompson          Microelectronics
Bristol BS12 4SQ, U.K.    Phone/FAX: +44 454 611564/617910    Group

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: forge@netcom.com (FORGE Customer Support) Subject: News from Applied Parallel Research Summary: Latest Developments in Parallelization Tools from APR Organization: Applied Parallel Research, Inc. Date: Thu, 11 Nov 1993 19:31:34 GMT Apparently-To: comp-parallel@uunet.uu.net

+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
The Latest News from Applied Parallel Research...
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
November 1993

As we enter our third year, we are more excited than ever about the latest additions to our FORGE family of products, and the growing number of vendors and programmers who are using them.

=-=-=-= MAGIC =-=-=-=

At the top of our list of new things we want to tell you about are our new MAGIC batch parallelizers that we are announcing at Supercomputing 93 in Portland. FORGE Magic/DM Parallelizer (dpf) for distributed memory is able to automatically (automagically?) partition data arrays and distribute loops based upon a static analysis of the source program. Or, you can supply a serial timing profile to direct the automatic parallelization right to the hot spots in your code. With FORGE Magic/SM (spf) for shared memory systems, data arrays are automatically padded and aligned for optimal cache management, and DO loops are parallelized by compiler directives specific to the target system.
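To illustrate what the padding and alignment mentioned above buys (the FORGE tools do this to Fortran arrays automatically; the C fragment below is only a hand-written sketch of the same effect, with an assumed 64-byte cache line and 16 processors): when several processors update neighbouring elements of a shared array, spacing each processor's element out to its own cache line stops their updates from repeatedly invalidating one another's cached copies.

/* Sketch of cache-line padding, not FORGE output.  CACHE_LINE and NPROCS
 * are assumed values for the illustration. */
#define CACHE_LINE 64                 /* bytes, machine dependent */
#define NPROCS     16

struct padded_slot {
    double value;                              /* the per-processor datum      */
    char   pad[CACHE_LINE - sizeof(double)];   /* push the next slot onto its  */
};                                             /* own cache line               */

static struct padded_slot partial[NPROCS];     /* processor p touches only
                                                  partial[p].value             */

Without the pad member, the sixteen doubles would share just two cache lines and every update would bounce those lines between processors.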
It would be outrageous of us to claim that our MAGIC technology can automatically produce the best parallelization strategy for all applications, and we won't. But the one important claim we do make is that it is an incredible way to get a first rough sketch at a parallelization. This may be especially useful with large, unwieldly codes when, most likely, you would not have a clue as to where to begin. A parallelization report shows you in great detail which loops parallelized and which data arrays were partitioned, and how this was done. More importantly, it shows which loops/arrays could not be parallelized and the inhibitors in the program that prevented this. An output option annotates the original Fortran 77 program with parallelization directives that you can amend to refine the parallelization. Our intention with these MAGIC parallelizing pre-compilers is to provide facilities similar to what we used to vectorize code not too long ago. Each can be used to generate instrumented programs for serial runtime execution timing. FORGE Magic/DM (dpf) can also instrument the generated parallelized code to produce parallel runtime performance profiles that identify communication bottlenecks and losses due to poor load balancing. =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= High Performance Fortran, HPF =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Our new FORGE HPF batch pre-compiler, xhpf, is now available with an optional MAGIC automatic parallelization mode as well. xhpf with MAGIC is able to take a serial Fortran 77 program and automatically generate a parallelized code with Fortran 90 array syntax and HPF directives. Our HPF pre-compiler has a number of capabilities that may prove invaluable. For example, the HPF consistency checker assures that the parallelization directives you supply are legal HPF and are consistent with themselves and the program. Also the parallelization is viewable from our interactive FORGE/DMP Parallelizer through is compatible database. And, if your target system does not yet have an HPF compiler, xhpf, like dpf, will generate a SPMD Fortran 77 code with explicit message passing calls interfacing to PVM, Express, Linda, etc. You may know that other companies are struggling right now to provide HPF compilers on a number of systems. However, we can report the following regarding APRs HPF tools: * They are available today. * We generate efficient Fortran 77 code from the HPF that is immediately compilable and optimizable by most native f77 compilers. * We parallelize Fortran DO loops as well as Fortran 90 array syntax. (HPF compilers only parallelize array syntax.) * MAGIC on xhpf will generate an initial HPF parallelization automatically for you to start with. * You can review and analyze the parallelization with our FORGE/DMP interactive tool. * You can instrument the parallel code and obtain a parallel runtime performance profile that includes measurement of all communication costs and bottlenecks. * With our unique parallel runtime library, we can interface to all major multiprocessor systems and workstation clusters running PVM, Express, Linda, IBM EUI, nCUBE, Intel NT, you name it!. We welcome the opportunity to demonstrate our HPF capabilities to you if you give us a call. =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= FORGE Motif GUI Fortran Explorer =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Another major development is the release of our first interactive product to utilize the Motif graphic user interface. 
FORGE Explorer, based upon the FORGE Baseline Browser, is now available on IBM RS/6000, DEC Alpha, and HP workstations supporting the Motif GUI. It presents an easy to use and mostly intuitive approach to interprocedural program data and control flow analysis, tracing, and global context searching. We are moving to transform all FORGE interactive products into the world of Motif by the end of next year. FORGE Explorer is actually fun to use... you've got to see it! =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Advanced Shared Memory Parallelizers =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= The last area of product development wed like to mention is the release of advanced, new shared memory parallelizers that are able to optimize program cache management by padding and alignment of global arrays automatically. We also have developed a Global Index Reordering (GIR) restructurer that can make a positive gain on performance by automatically reordering the indices of arrays in inner loops to eliminate inappropriate striding through memory. This restructuring, which is so tedious and error prone when attempted by hand, can render a successful parallelization out of a marginal performer. =-=-=-=-=-=-=-= Vendor Support =-=-=-=-=-=-=-= A growing number of supercomputing vendors are now actively supporting the ongoing development of APRs products for their shared and distributed memory multiprocessor systems: IBM (SP1, Power/4, and RS/6000 clusters), Fujitsu (VPP500), Intel (Paragon), nCUBE, HP, DEC, Cray Computers (Cray 3). We also offer our products directly to end users of SGI, Cray Research, and Convex systems. =-=-=-=-= Further! =-=-=-=-= We look forward to a new year of challenges to provide the ultimate tools for parallel processing, and hope to be speaking with you soon. If you will be attending SuperComputing 93 in Portland, stop by and say hello and catch a demonstration of these products -- we will be in booth 401. John Levesque, President, Applied Parallel Research, Inc. ...for the APR Staff Applied Parallel Research, Inc. 550 Main St., Suite I Placerville, CA 95667 916/621-1600 forge@netcom.com -- /// Applied /// FORGE 90 Customer Support Group /// Parallel /// 550 Main St., Placerville, CA 95667 /// Research, Inc. (916) 621-1600 621-0593fax forge@netcom.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: forge@netcom.com (FORGE Customer Support) Subject: Automatic Parallelization News Release Summary: APR Announces Automatic Parallelization Tools for Fortran Keywords: Automatic Parallelization Fortran Organization: Applied Parallel Research, Inc. Date: Thu, 11 Nov 1993 19:34:25 GMT Apparently-To: comp-parallel@uunet.uu.net ..November 11, 1993 NEWS RELEASE Applied Parallel Research, Inc. 550 Main St., Suite I Placerville, CA 95667 Robert Enk, Sales and Marketing (301) 718-3733, Fax: (301) 718-3734 ----------------------------------------------------------------------- FOR IMMEDIATE RELEASE Applied Parallel Research announces the addition of two revolutionary products to the FORGE family of parallelization tools for Fortran and significant enhancements to its current set of products. Placerville, California, USA, November 11, 1993 -- Applied Parallel Research Inc. (APR) announces its MAGIC series of automatic parallelizing pre-compilers, FORGE Magic/DM for distributed memory systems and clustered workstations, and FORGE Magic/SM for shared memory parallel systems. 
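For readers unfamiliar with the striding problem that the Global Index Reordering described above is aimed at, here is the effect in miniature. The example is C (row-major, so the roles of the indices are the mirror image of Fortran's column-major layout), and it only illustrates the memory-access pattern, not APR's transformation itself.

#define N 1024
static double a[N][N], b[N][N];

/* Poor: the inner loop varies the leftmost index, so consecutive accesses
 * are N doubles apart in memory and nearly every one misses the cache. */
void copy_strided(void)
{
    int i, j;
    for (j = 0; j < N; j++)
        for (i = 0; i < N; i++)
            a[i][j] = b[i][j];
}

/* Better: the inner loop varies the rightmost index, giving unit stride;
 * reordering the array indices (writing a[j][i] everywhere) achieves the
 * same thing when the loop order cannot be changed. */
void copy_unit_stride(void)
{
    int i, j;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            a[i][j] = b[i][j];
}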
Together these products represent the state-of-the-art in parallelization technology and take a giant step forward in providing development tools critical to the successful utilization of parallel processing systems. Magic/DM represents the first production quality automatic parallelization facility for distributed memory systems. Sophisticated interprocedural analysis allows Magic/DM to automatically identify the most significant loops within a program and to develop a parallelization strategy based upon those loops and the arrays they reference. On Initial benchmarks, Magic/DM has generated parallel code that achieves 80% of the performance obtained from hand parallelization. Optionally, Magic/DM can create an output file that is the original Fortran code with APR parallelization directives strategically embedded in the program. A detailed parallelization report is also available which describes for the programmer which arrays were partitioned and how the loops were parallelized, and, most importantly, indicates where parallelization could not be accomplished and what inhibitors are causing the problem. This output forms the basis of a first parallelization which the programmer can further refine through the use of parallel statistics gathering and APR Directives. Dan Anderson of the National Center for Atmospheric Research said, "This is just what our users need, a useable tool that not only parallelizes as much as possible, but also generates useful diagnostics that can be used to hand tune the application." Magic/SM is also an automatic batch parallelization tool but directed towards multi CPU shared memory systems. Magic/SM automatically analyzes candidate loops for parallelization and annotates the original program with the target systems compiler specific directives. It also produces a detailed parallelization report which can be used for further refinement of the parallelization. APR's HPF Compilation System, xHPF, has been enhanced to include an Auto Parallelization option. A user is now able to input a Fortran 77 program with optional sequential timing information to xHPF and generate a parallelized source file with Fortran 90 array syntax and HPF directives. This facility allows organizations that might standardize on HPF to convert their existing Fortran 77 programs to HPF without expensive and time consuming hand conversion. John Levesque, President of Applied Parallel Research said, "With the addition of these automatic parallelization products and enhancements, APR is able to offer the most complete and sophisticated set of Fortran parallelization tools in the industry. The FORGE Magic products provide the same ease of use for parallel computing systems that vectorizing compilers and pre-compilers have provided to users of vector machines. APR's combination of batch and interactive products can now address the needs of first time parallel system users as well as seasoned parallel programmers." APR's source code browser, FORGE Baseline has been enhanced and redesignated FORGE Explorer. FORGE Explorer is APR's first product to utilize the Motif graphic user interface and has been significantly restructured for ease of use in providing control flow information, variable usage and context sensitive query functions. Information on APR's product can be obtained by contacting Robert Enk, VP of Sales and Marketing at (301) 718-3733 or by E-mail at enk@netcom.com. 
--
/// Applied          /// FORGE 90 Customer Support Group
/// Parallel         /// 550 Main St., Placerville, CA 95667
/// Research, Inc.       (916) 621-1600   621-0593 fax   forge@netcom.com

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: math1pf@elroy.uh.edu (Zheng, Ping) Subject: Hypercube expert wanted. Short term but excellent pay! Organization: University of Houston References: <1993Nov10.132836.21343@hubcap.clemson.edu> Nntp-Posting-Host: elroy.uh.edu News-Software: VAX/VMS VNEWS 1.41

We need someone who is very familiar with hypercube architectures and algorithms to be our short-term consultant (yes, via email). We will pay $30 to $40 an hour depending on your qualifications. If interested, please contact me through this account or call (713) 527-0468. Thanks, Ping

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: Re: The Future of Parallel Computing Date: 11 Nov 1993 22:51:17 GMT Organization: Professional Student, University of Maryland, College Park References: <1993Nov11.142432.8924@hubcap.clemson.edu> Nntp-Posting-Host: coke.eng.umd.edu Originator: dbader@coke.eng.umd.edu

In article <1993Nov11.142432.8924@hubcap.clemson.edu>, Murray Cole writes:
>parallel hardware". I would like to suggest (for discussion at least!)
>that the reverse might be the case. We now have books full of snappy
>parallel algorithms, whether for abstract models such as PRAM or for

However, the field of parallel algorithmics is still in its infancy. Five years ago, almost no books existed on the subject. A parallel algorithm must be designed with some concept of an abstract model of the parallel hardware, whether this model is implicit (such as a PRAM algorithm) or explicit (such as a hypercube or mesh algorithm). Granted, we have found some "obvious" parallel algorithms and labelled them as parallel primitives, such as parallel scan operations, block permutations, etc. But I do not believe we have the "canonical" representation for a parallel algorithm yet.

>Unfortunately, most of them don't seem to run too well on real parallel
>machines, even when there is no mismatch in the architectural structure.

Which parallel algorithms do not perform "well" on real parallel machines? In all my research I have yet to find a parallel algorithm running on a parallel machine which does not have some speedup from the uniprocessor case. (Disclaimer to follow:)

>parallel algorithms with reasonable efficiency. For example, why shouldn't
>I expect my simple summation algorithm for n values on n hypercube processors
>to run a lot faster than a sequential summation on one of those processors?

A parallel algorithm produces a result faster (in a relative sense) on a parallel machine than on a single processor when its parallel execution time on "p" processors, including overheads, is less than the sequential execution time. You need to look at the "granularity" of a problem to decide whether it will perform faster on a parallel machine. (For an introduction to granularity, see Stone, "High Performance Computer Architecture", Section 6.2). If your machine is meant for coarse-grained problems (such as the case you outline above), you will need to sum "n > N" numbers to see a speedup, where "N" is some large threshold for the given algorithm and machine size. -david

David A. Bader Electrical Engineering Department A.V.
-david David A. Bader Electrical Engineering Department A.V. Williams Building University of Maryland College Park, MD 20742 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: djpatel@chaph.usc.edu (Dhiren Jeram Patel) Subject: Need info on MasPar and CM-5 parallel machines. Organization: University of Southern California, Los Angeles, CA Sender: djpatel@chaph.usc.edu Hi, I need some information on the MasPar and Thinking Machines CM-5 parallel computers. Specifically, I need information on architecture issues such as interconnect/routing and prefetching. I'd appreciate help from anyone who can point me in the right direction. (I've already looked in the University library, but I didn't find much. Maybe I wasn't looking in the right place.) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: udo@mars.informatik.rwth-aachen.de (Udo Brocker) Subject: group leader position Organization: Rechnerbetrieb Informatik - RWTH Aachen Nntp-Posting-Host: mars.lfbs.rwth-aachen.de Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Keywords: operating-systems parallel-systems !!!!!!!! Working group leader position available !!!!!! At the Institute of Operating Systems at RWTH Aachen University of Technology, Germany, the following position is available: Manager of Operating System Research Group ------------------------------------------ The leader of the working group on operating system research is responsible for all duties of the group on operating systems (currently 3 staff plus students) within the institute. He will direct the R&D activities of the group and represent the group within the institute, the university and outside (industry, funding agencies). Apart from his research activities, his main duties are to participate in the teaching of the institute, to acquire research funds and to manage the group. The focus of the group's R&D activities (and therefore the manager's) should be the enhancement of micro-kernel-based multicomputer operating systems for parallel supercomputers, preferably Mach and OSF/1. Selected research topics might be scalability and performance issues, implementation of parallel programming models, shared virtual memory, resource scheduling, etc. The position offers the opportunity for further scientific qualification (Habilitation, Dissertation). In the case of Habilitation, a PhD in Computer Science, Electrical Engineering or a different engineering subject is a must. A spirit for teamwork, well-managed projects and industry orientation will be assessed very positively. The temporary assignment is usually for 6 years (3 years initially plus a 3-year extension). Compensation follows the German university system (C1 or C2). Interested candidates may send e-mail (a short resume is sufficient for first contact) or phone: Udo Brocker e-mail: udo@lfbs.rwth-aachen.de Tel.: +49-241-80-7635 ...................................................................... | _ Udo Brocker, Lehrstuhl fuer Betriebssysteme, RWTH Aachen |_|_`__ Kopernikusstr.
16, D-52056 Aachen, | |__) Tel.: +49-241-807635; Fax: +49-241-806346 |__) email: udo@lfbs.rwth-aachen.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: djb1@ukc.ac.uk Newsgroups: comp.sys.transputer,comp.parallel,comp.dcom.cell-relay Subject: Networks, Routers and Transputers Book available via ftp Date: Fri, 12 Nov 93 13:53:00 GMT Organization: Computing Lab, University of Kent at Canterbury, UK. Keywords: networks, routers, transputers, virtual channels, ATM We are pleased to announce the FREE electronic distribution of the following book: Networks, Routers and Transputers: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Function, Performance and Applications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Edited by: M.D. May, P.W. Thompson and P.H. Welch Written by: C.J. Adams, C. Barnaby, J.W. Burren, R. Francis, V.A. Griffiths, H. Gurney, J.M. Kerridge, P.F. Linnington, M.D. May, D.A. Nicole, N. Richards, R.M. Shepherd, M. Simpson, P.W. Thompson, C.P.H. Walker, P.H. Welch and J.M. Wilson The introduction of high-speed serial communication links and general purpose VLSI routers offers new opportunities in system design. C104 routers can be used to construct high-throughput, low-latency interconnection networks for use in telecommunications, parallel computers and electronic systems in general. The T9000 transputer with its integrated communications links can be connected directly to these networks, providing high-performance data-handling, protocol conversion and network control. The first chapters cover the rationale behind the design of the new links, the C104 universal packet router and the T9000 'virtual channel' processor. Other chapters deal with interconnection networks for parallel computers and specific topics relevant to building powerful systems from C104 routers and T9000 transputers. The book ends with detailed application case-studies: very large (parallel) database machines, high-performance switches for the CCITT Asynchronous Transfer Mode (ATM) broadband networks (both 'public' and 'on-your-desk') and distributed scalable multimedia systems (built on ATM). This book was written by the technical design-engineers at INMOS who were responsible for the major decisions, plus independent experts associated with the database, networking and multi-media industries. Paper Version ~~~~~~~~~~~~~ IOS Press publish the physical version: Networks, Routers and Transputers: Function, Performance and Applications Edited by M.D. May, P.W. Thompson and P.H. Welch IOS Press 1993, 210pp, hard cover, ISBN: 90 5199 129 0 Price: US$80 / GBP 55 Ordering addresses: UK and Ireland: IOS Press/Lavis Marketing, 73 Lime Walk, Headington, Oxford OX3 7AD, England. FAX: +44 (0)865 75 0079. USA and Canada: IOS Press, Inc., Postal Drawer 10558, Burke, VA 22009-0558, USA. FAX: +1 703 250 47 05. Rest of World: IOS Press, Van Diemenstraat 94, 1013 CN Amsterdam, Netherlands. FAX: +31 20 620 34 19. The Electronic Version ~~~~~~~~~~~~~~~~~~~~~~ The book is now available in electronic form for personal use only (NOT to be resold) courtesy of the publishers (IOS Press) and the copyright holders (INMOS Ltd and the authors). This is available from the Transputer, occam and parallel computing archive at unix.hensa.ac.uk in /parallel/books/ios/nrat. The book is split up for ease of access, with each chapter present as one or more PostScript files, packed using the compress(1) program.
The files are: -rw-r--r-- 1 djb1 26981 Nov 9 16:08 Overview -rw-r--r-- 1 djb1 109591 Oct 12 16:46 Introduction.ps.Z -rw-r--r-- 1 djb1 94295 Oct 12 16:34 Chapter1.ps.Z -rw-r--r-- 1 djb1 114029 Oct 12 16:41 Chapter2.ps.Z -rw-r--r-- 1 djb1 113643 Oct 12 16:41 Chapter3.ps.Z -rw-r--r-- 1 djb1 155458 Oct 12 16:41 Chapter4.ps.Z -rw-r--r-- 1 djb1 188313 Oct 12 16:42 Chapter5.ps.Z -rw-r--r-- 1 djb1 446783 Oct 12 16:43 Chapter6a.ps.Z -rw-r--r-- 1 djb1 374537 Oct 12 16:44 Chapter6b.ps.Z -rw-r--r-- 1 djb1 140311 Oct 12 16:44 Chapter7.ps.Z -rw-r--r-- 1 djb1 115529 Oct 12 16:45 Chapter8.ps.Z -rw-r--r-- 1 djb1 237419 Oct 12 16:46 Chapter9.ps.Z -rw-r--r-- 1 djb1 504682 Oct 12 16:36 Chapter10a.ps.Z -rw-r--r-- 1 djb1 599387 Oct 12 16:37 Chapter10b.ps.Z -rw-r--r-- 1 djb1 676387 Oct 12 16:39 Chapter10c.ps.Z -rw-r--r-- 1 djb1 179450 Oct 12 16:40 Chapter10d.ps.Z -rw-r--r-- 1 djb1 271046 Oct 12 16:40 Chapter11.ps.Z -rw-r--r-- 1 djb1 428921 Oct 12 16:34 Appendices.ps.Z The file Overview contains the description of each chapter and follows at the end of this article. You can get the book via either anonymous ftp by contacting site: unix.hensa.ac.uk (129.12.21.7) path: /parallel/books/ios/nrat/ or via emailing mail archive@unix.hensa.ac.uk with the contents send /parallel/books/ios/nrat/ for each in the list above. We _STRONGLY_ suggest you get the Overview file first (or read the file below) and pick the chapters that interest you rather than taking all 5 megabytes of files. Enjoy Dave Beckett / Peter Thompson ------------------------------------------------------------------------ Networks, Routers and Transputers: Function, Performance and Applications Edited by: M.D. May, P.W. Thompson and P.H. Welch Preface High speed networks are an essential part of public and private telephone and computer communications systems. An important new development is the use of networks within electronic systems to form the connections between boards, chips and even the subsystems of a chip. This trend will continue over the 1990s, with networks becoming the preferred technology for system interconnection. Two important technological advances have fuelled the development of interconnection networks. First, it has proved possible to design high-speed links able to operate reliably between the terminal pins of VLSI chips. Second, high levels of component integration permit the construction of VLSI routers which dynamically route messages via their links. These same two advances have allowed the development of embedded VLSI computers to provide functions such as network management and data conversion. Networks built from VLSI routers have important properties for system designers. They can provide high data throughput and low delay; they are scalable up to very large numbers of terminals; and they can support communication on all of their terminals at the same time. In addition, the network links require only a small number of connection points on chips and circuit boards. The most complex routing problems are moved to the place where they can be done most easily and economically - within the VLSI routers. The first half of this book brings together a collection of topics in the construction of communication networks. The first chapters are concerned with the technologies for network construction. They cover the design of networks in terms of standard links and VLSI routing chips, together with those aspects of the transputer which are directly relevant to its use for embedded network computing functions. 
Two chapters cover performance modelling of links and networks, showing the factors which must be taken into consideration in network design. The second half of the book brings together a collection of topics in the application of communication networks. These include the design of interconnection networks for high-performance parallel computers, and the design of parallel database systems. The final chapters discuss the construction of large-scale networks which meet the emerging ATM protocol standards for public and private communications systems. The 1990s will see the progressive integration of computing and communications: networks will connect computers; computers will be embedded within networks; networks will be embedded within computers. Thus this book is intended for all those involved in the design of the next generation of computing and communications systems. February 1993 Work on this subject has been supported under various ESPRIT projects, in particular `Parallel Universal Message-passing Architecture' (PUMA, P2701), and more recently also under the `General Purpose MIMD' (P5404) project. The assistance of the EC is gratefully acknowledged. Contents [ Introduction.ps.Z - 109591 bytes ] 1 Transputers and Routers: Components for Concurrent Machines [ Chapter1.ps.Z - 94295 bytes ] 1.1 Introduction 1.2 Transputers 1.3 Routers 1.4 Message Routing 1.5 Addressing 1.6 Universal Routing 1.7 Conclusions 2 The T9000 Communications Architecture [ Chapter2.ps.Z - 114029 bytes ] 2.1 Introduction 2.2 The IMS T9000 2.3 Instruction set basics and processes 2.4 Implementation of Communications 2.5 Alternative input 2.6 Shared channels and Resources 2.7 Use of resources 2.8 Conclusion 3 DS-Links and C104 Routers [ Chapter3.ps.Z - 113643 bytes ] 3.1 Introduction 3.2 Using links between devices 3.3 Levels of link protocol 3.4 Channel communication 3.5 Errors on links 3.6 Network communications: the IMS C104 3.7 Conclusion 4 Connecting DS-Links [ Chapter4.ps.Z - 155458 bytes ] 4.1 Introduction 4.2 Signal properties of transputer links 4.3 PCB connections 4.4 Cable connections 4.5 Error Rates 4.6 Optical interconnections 4.7 Standards 4.8 Conclusions 4.9 References 4.10 Manufacturers and products referred to 5 Using Links for System Control [ Chapter5.ps.Z - 188313 bytes ] 5.1 Introduction 5.2 Control networks 5.3 System initialization 5.4 Debugging 5.5 Errors 5.6 Embedded applications 5.7 Control system 5.8 Commands 5.9 Conclusions 6 Models of DS-Link Performance [ Chapter6a.ps.Z - 446783 bytes ] [ Chapter6b.ps.Z - 374537 bytes ] 6.1 Performance of the DS-Link Protocol 6.2 Bandwidth Effects of Latency 6.3 A model of Contention in a Single C104 6.4 Summary 7 Performance of C104 Networks [ Chapter7.ps.Z - 140311 bytes ] 7.1 The C104 switch 7.2 Networks and Routing Algorithms 7.3 The Networks Investigated 7.4 The traffic patterns 7.5 Universal Routing 7.6 Results 7.7 Performance Predictability 7.8 Conclusions 8 General Purpose Parallel Computers [ Chapter8.ps.Z - 115529 bytes ] 8.1 Introduction 8.2 Universal message passing machines 8.3 Networks for Universal message passing machines 8.4 Building Universal Parallel Computers from T9000s and C104s 8.5 Summary 9 The Implementation of Large Parallel Database Machines on T9000 and C104 Networks [ Chapter9.ps.Z - 237419 bytes ] 9.1 Database Machines 9.2 Review of the T8 Design 9.3 An Interconnection Strategy 9.4 Data Storage 9.5 Interconnection Strategy 9.6 Relational Processing 9.7 Referential Integrity Processing 9.8 Concurrency Management 9.9 Complex 
Data Types 9.10 Recovery 9.11 Resource Allocation and Scalability 9.12 Conclusions 10 A Generic Architecture for ATM Systems [ Chapter10a.ps.Z - 504682 bytes ] [ Chapter10b.ps.Z - 599387 bytes ] [ Chapter10c.ps.Z - 676387 bytes ] [ Chapter10d.ps.Z - 179450 bytes ] 10.1 Introduction 10.2 An Introduction to Asynchronous Transfer Mode 10.3 ATM Systems 10.4 Mapping ATM onto DS-Links 10.5 Conclusions 11 An Enabling Infrastructure for a Distributed Multimedia Industry [ Chapter11.ps.Z - 271046 bytes ] 11.1 Introduction 11.2 Network Requirements for Multimedia 11.3 Integration and Scaling 11.4 Directions in networking technology 11.5 Convergence of Applications, Communications and Parallel Processing 11.6 A Multimedia Industry - the Need for Standard Interfaces 11.7 Outline of a Multimedia Architecture 11.8 Levels of conformance 11.9 Building stations from components 11.10 Mapping the Architecture onto Transputer Technology Appendices: [ Appendices.ps.Z - 428921 bytes ] A New link cable connector B Link waveforms C DS-Link Electrical specification D An Equivalent circuit for DS-Link Output Pads 1: Transputers and Routers: Components for Concurrent Machines M.D. May and P.W. Thompson [ Chapter1.ps.Z - 94295 bytes ] This chapter describes an architecture for concurrent machines constructed from two types of component: `transputers' and `routers'. In subsequent chapters we consider the details of these two components, and show the architecture can be adapted to include other types of component. A transputer is a complete microcomputer integrated in a single VLSI chip. Each transputer has a number of communication links, allowing transputers to be interconnected to form concurrent processing systems. The transputer instruction set contains instructions to send and receive messages through these links, minimizing delays in inter-transputer communication. Transputers can be directly connected to form specialised networks, or can be interconnected via routing chips. Routing chips are VLSI building blocks for interconnection networks: they can support system-wide message routing at high throughput and low delay. 2: The T9000 Communications Architecture M.D. May, R.M. Shepherd and P.W. Thompson [ Chapter2.ps.Z - 114029 bytes ] This chapter describes the communications capabilities implemented in the IMS T9000 transputer, and supported by the IMS C104 packet router, which is discussed in chapter 3. The T9000 retains the point-to-point synchronised message passing model implemented in first generation of transputers but extends it in two significant ways. The most important innovation of the T9000 is the virtualization of external communication. This allows any number of virtual links to be established over a single hardware link between two directly connected T9000s, and for virtual links to be established between T9000s connected by a routing network constructed from C104 routers. A second important innovation is the introduction of a many-one communication mechanism, the resource. This provides, amongst other things, an efficient distributed implementation of servers. 3: DS-Links and C104 Routers M. Simpson and P.W. Thompson [ Chapter3.ps.Z - 113643 bytes ] Millions of serial communication links have been shipped as an integral part of the transputer family of microprocessor devices. This `OS-Link', as it is known, provides a physical point-to-point connection between two processes running in separate processors. 
It is full-duplex, and has an exceptionally low implementation cost and an excellent record for reliability. Indeed, the OS-Link has been used in almost all sectors of the computer, telecommunications and electronics markets. Many of these links have been used without transputers, or with a transputer simply serving as an intelligent DMA controller. However, they are now a mature technology, and by today's standards their speed of 20 Mbits/s is relatively low. Since the introduction of the OS-Link, a new type of serial interconnect has evolved, known as the DS-Link. A major feature of the DS-Link is that it provides a physical connection over which any number of software (or `virtual') channels may be multiplexed; these can either be between two directly connected devices, or can be between any number of different devices, if the links are connected via (packet) routing switches. Other features include detection and location of the most likely errors, and a transmission speed of 100 Mbits/s, with 200 Mbits/s planned and further enhancement possible. Although DS-Links have been designed for processor to processor communication, they are equally appropriate for processor to memory communication and specialized applications such as disk drives, disk arrays, or communication systems. 4: Connecting DS-Links H. Gurney and C.P.H. Walker [ Chapter4.ps.Z - 155458 bytes ] Digital design engineers are accustomed to signals that behave as ones and zeros, although they have to be careful about dissipation and ground inductance, which become increasingly important as speeds increase. Communications engineers, on the other hand, are accustomed to disappearing signals. They design modems that send 19200 bits per second down telephone wires that were designed 90 years ago to carry 3.4KHz voice signals. Their signals go thousands of kilometers. They are used to multiplexing lots of slow signals down a single fast channel. They use repeaters, powered by the signal wires. Digital designers do not need all these communications techniques yet. But sending 100Mbits/s or more down a cable much longer than a meter has implications that are more analog than digital, which must be taken care of just like the dissipation and ground inductance problems, to ensure that signals still behave as ones and zeros. Actually, it is easy to overestimate the problems of these signal speeds. Engineers designing with ECL, even fifteen years ago, had to deal with some of the problems of transmitting such signals reliably, at least through printed circuit boards (PCBs), backplanes, and short cables. One of the best books on the subject is the Motorola `MECL System Design Handbook' by William R Blood, Jr., which explains about transmission lines in PCBs and cables. This shows waveforms of a 50MHz signal at the end of 50ft (15m) of twisted pair, and of a 350MHz signal at the end of 10ft (3m) of twisted pair, both with respectable signals. This chapter first discusses the signal properties of DS-Links. PCB and cable connections are then described, followed by a section on error rates: errors are much less frequent on transputer links than is normal in communications. A longer section introduces some of the characteristics of optical connections including optical fibre, which should be suitable for link connections up to 500m, using an interface chip to convert between the link and the fibre. A pointer is given towards possible standards for link connections. 5: Using Links for System Control J.M. 
Wilson [ Chapter5.ps.Z - 188313 bytes ] The T9000 family of devices includes processors and routers which have subsystems and interfaces which are highly flexible to match the requirements of a wide range of applications. In addition to the static configuration requirements of subsystems such as the memory interface of the T9000, the more dynamic aspects of a network of devices must be configured before application software is loaded. These more dynamic items include: - cache organization; - data link bit-rates; - virtual link control blocks; If T9000 processors are configured as stand-alone devices, the configurable subsystems will be initialized by instructions contained in a local ROM. When the devices are integrated as part of a network with a static configuration every processor in the network could also initialize these subsystems independently by executing code contained in a local ROM. Typically, however, networks of T9000 family devices contain routers as well as processors and executing code from a ROM is not an option for a routing device. As a consequence, routing devices must be configured under external control. During system development or for systems which are used for multiple applications a flexible configuration mechanism for processors is also required. Debugging of software and hardware on networks consisting of many devices is not a simple problem. The major difficulty is in monitoring the behavior of the system as an integrated whole rather than observing the individual behavior of the separate components. A flexible mechanism which allows monitoring tools to observe and manage every device in a network in a simple manner is essential in designing a system-wide debugging environment. 6: Models of DS-Link Performance C. Barnaby, V.A. Griffiths and P.W. Thompson [ Chapter6a.ps.Z - 446783 bytes ] [ Chapter6b.ps.Z - 374537 bytes ] This chapter contains analytic studies of the performance of DS-Links, the IMS T9000 virtual channel processor and the IMS C104 packet routing switch. The first section considers the overheads imposed by the various layers of the DS-Link protocol on the raw bit-rate. Results are presented for the limiting bandwidth as a function of message size, which show that the overheads are very moderate for all but the smallest messages (for which the cost of initiating and receiving a message will dominate in any case). The next section analyses the diminution of bandwidth caused by latency at both the token flow-control and packet-acknowledge layers of the protocol. The losses due to stalls at the packet level of the protocol when only a single virtual channel is active are plotted in the latter part of the section. The final section considers the performance of the C104 routing switch under heavy load, both in the average and the worst case. 7: Performance of C104 Networks C. Barnaby and M.D. May [ Chapter7.ps.Z - 140311 bytes ] The use of VLSI technology for specialised routing chips makes the construction of high-bandwidth, low-latency networks possible. One such chip is the IMS C104 packet routing chip, described in chapter 3. This can be used to build a variety of communication networks. In this chapter, interconnection networks are characterized by their throughput and delay. Three families of topology are investigated, and the throughput and delay are examined as the size of the network varies. 
Using deterministic routing (in which the same route is always used between source and destination), random traffic patterns and systematic traffic patterns are investigated on each of the networks. The results show that on each of the families examined, there is a systematic traffic pattern which severely affects the throughput of the network, and that this degradation is more severe for the larger networks. The use of universal routing, where an amount of random behavior is introduced, overcomes this problem and provides the scalability inherent to the network structure. This is also shown to be an efficient use of the available network links. An important factor in network performance is the predictability of the time it will take a packet to reach its destination. Deterministic routing is shown to give widely varying packet completion times with variation of the traffic pattern in the network. Universal routing is shown to remove this effect, with the time taken for a packet to reach its destination being stabilized. In the following investigation, we have separated issues of protocol overhead, such as flow control, from issues of network performance. 8: General Purpose Parallel Computers C. Barnaby, M.D. May and D.A. Nicole [ Chapter8.ps.Z - 115529 bytes ] Over the last decade, many different parallel computers have been developed, which have been used in a wide range of applications. Increasing levels of component integration, coupled with difficulties in further increasing the clock speed of sequential machines, make parallel processing technically attractive. By the late 1990s, chips with 10^8 transistors will be in use, but design and production will continue to be most effective when applied to volume manufacture. A "universal" parallel architecture would allow cheap, standard multiprocessors to become pervasive, in much the same way that the von Neumann architecture has allowed standard uniprocessors to take over from specialised electronics in many application areas. Scalable performance One of the major challenges for a universal parallel architecture is to allow performance to scale with the number of processors. There are obvious limits to scalability: - For a given problem size, there will be a limit to the number of processors which can be used efficiently. However, we would expect it to be easy to increase the problem size to exploit more processors. - There will in practice be technological limits to the number of processors used. These will include physical size, power consumption, thermal density and reliability. However, as we expect performance/chip to achieve 100-1000 Mflops during the 1990s, the most significant markets will be served by machines with up to 100 processors. Software portability Another major challenge for a universal parallel architecture is to eliminate the need to design algorithms to match the details of specific machines. Algorithms must be based on features common to a large number of machines, and which can be expected to remain common to many machines as technology evolves. Both programmer and computer designer have much to gain from identifying the essential features of a universal parallel architecture: - the programmer because his programs will work on a variety of machines - and will continue to work on future machines. - the computer designer because he will be able to introduce new designs which make best use of technology to increase performance of the software already in use.
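The scalability limits listed in the chapter 8 summary can be illustrated with a toy efficiency model. In the C sketch below the overhead constant is an assumption chosen only to make the trend visible (it is not a figure from the book): efficiency, defined as speedup divided by the number of processors, collapses when the problem size is held fixed, but holds up when the problem grows with the machine.

/*
 * Toy efficiency model: n units of work shared by p processors, plus a
 * coordination overhead that grows like log2(p).  All constants are
 * assumptions for illustration only.
 */
#include <stdio.h>
#include <math.h>

static double efficiency(double n, int p)
{
    double t1 = n;                        /* time on one processor       */
    double tp = n / p + 50.0 * log2(p);   /* parallel time with overhead */
    return t1 / (p * tp);                 /* efficiency = speedup / p    */
}

int main(void)
{
    printf("%6s %14s %16s\n", "p", "fixed n=10000", "scaled n=1000*p");
    for (int p = 2; p <= 256; p *= 2)
        printf("%6d %14.2f %16.2f\n",
               p, efficiency(1.0e4, p), efficiency(1.0e3 * p, p));
    return 0;
}

With these made-up numbers, efficiency on 256 processors falls below 10% for the fixed-size problem but stays around 70% when the problem is scaled with the machine - the sense in which it is "easy to increase the problem size to exploit more processors".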
9: The Implementation of Large Parallel Database Machines on T9000 and C104 Networks J.M. Kerridge [ Chapter9.ps.Z - 237419 bytes ] The design of large database machines requires the resulting implementation be scalable and cheap. This means that use has to be made of commodity items whenever possible. The design also has to ensure that scalability is incorporated into the machine from its inception rather than as an after-thought. Scalability manifests itself in two different ways. First, the initial size of a system when it is installed should be determined by the performance and size requirements of the desired application at that time. Secondly, the system should be scalable as processing requirements change during the life-time of the system. The T9000 and C104 provide a means of designing a large parallel database machine which can be constructed from commodity components in a manner that permits easy scalability. 10: A Generic Architecture for ATM Systems C. Barnaby and N. Richards [ Chapter10a.ps.Z - 504682 bytes ] [ Chapter10b.ps.Z - 599387 bytes ] [ Chapter10c.ps.Z - 676387 bytes ] [ Chapter10d.ps.Z - 179450 bytes ] Introduction The rapid growth in the use of personal computers and high-performance workstations over the last ten years has fueled an enormous expansion in the data communications market. The desire to connect computers together to share information, common databases and applications led to the development of Local Area Networks and the emergence of distributed computing. At the same time, the geographical limitations of LANs and the desire to provide corporate-wide networks stimulated the development towards faster, more reliable telecommunications networks for LAN interconnection, with the need to support data as well as traditional voice traffic. The resulting increase in the use of digital technology and complex protocols has resulted in the need for enormous computing capability within the telecommunications network itself, with the consequent emergence of the concept of the Intelligent Network. With new, higher bandwidth applications such as video and multimedia on the horizon and user pressure for better, more seamless connection between computer networks, this convergence of computing and communications systems looks set to accelerate during the nineties. A key step in this convergence is the development by the CCITT of standards for the Broadband Integrated Services Digital Network (B-ISDN). B-ISDN seeks to provide a common infrastructure on which a wide variety of voice, data and video services can be provided, thereby eliminating (hopefully) the final barriers between the world of computer networks and the world of telecommunications. The technological basis for B-ISDN chosen by the CCITT is the Asynchronous Transfer Mode (ATM), a fast-packet switching technique using small, self-routing packets called cells. The single most important element which has driven the development of both distributed computing and the intelligent network is the microprocessor. Indeed, as systems such as telecommunications networks have come to look more like distributed computers, so microprocessor architectures which support distributed multi-processing have come to look like communications networks. A message-passing computer architecture, such as that of the transputer, shares much in common with a packet switching system and thus provides a natural architecture from which to build communication systems. 
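The "small, self-routing packets called cells" mentioned above have a fixed 53-byte format: a 5-byte header carrying the routing labels, followed by a 48-byte payload. The C sketch below spells out the UNI header fields as an ordinary (unpacked) structure; it is only an illustration of the cell layout, not code from the book or from any particular ATM implementation.

/*
 * ATM cell layout, UNI format: 5-byte header + 48-byte payload = 53 bytes.
 * Fields are shown unpacked for readability; on the wire they are
 * bit-packed into the 5 header octets.
 */
#include <stdint.h>
#include <stdio.h>

#define ATM_PAYLOAD 48

struct atm_cell {
    uint8_t  gfc;                  /* Generic Flow Control,        4 bits */
    uint8_t  vpi;                  /* Virtual Path Identifier,     8 bits */
    uint16_t vci;                  /* Virtual Channel Identifier, 16 bits */
    uint8_t  pti;                  /* Payload Type Indicator,      3 bits */
    uint8_t  clp;                  /* Cell Loss Priority,          1 bit  */
    uint8_t  hec;                  /* Header Error Control,        8 bits */
    uint8_t  payload[ATM_PAYLOAD]; /* fixed 48-byte payload               */
};

int main(void)
{
    struct atm_cell cell = { .vpi = 3, .vci = 42, .payload = { 0 } };

    /* A switch forwards a cell by looking only at the VPI/VCI labels,
     * normally rewriting them on the outgoing link, which is why the
     * cells are described as self-routing.                              */
    printf("forwarding cell on VPI %d / VCI %d (%d-byte payload)\n",
           cell.vpi, cell.vci, ATM_PAYLOAD);
    return 0;
}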
The communications architecture of the latest generation transputer, the T9000, shares much in common with ATM and is thus a natural choice for the implementation of ATM systems. In this Chapter we describe the application of the transputer, in particular the serial links and packet routing capabilities of the communications architecture, to the design of ATM switching systems. We discuss their use in public switching systems and present a generic architecture for the implementation of private ATM switches and internetworking applications. We look at terminal adaption requirements and develop some ideas for interfacing transputers, routers and serial links to ATM networks. Finally, we consider various aspects of the performance of this architecture. 11: An Enabling Infrastructure for a Distributed Multimedia Industry C.J. Adams, J.W. Burren, J.M. Kerridge, P.F. Linnington, N. Richards and P.H. Welch [ Chapter11.ps.Z - 271046 bytes ] Advances in technology for telecommunication and new methods for handling media such as voice and video have made possible the creation of a new type of information system. Information systems have become an essential part of the modern world and they need to be made accessible to a very high proportion of the working population. It is therefore important to exploit all the means available for making the transfer of information effective and accurate. In fields such as computer assisted training, multimedia presentation is already well established as a tool for conveying complex ideas. So far, however, the application of multimedia solutions to information retrieval has been limited to single isolated systems, because the bulk of the information required has needed specialized storage techniques and has exceeded the capacity of present day network infrastructure. There do exist special purpose multimedia communication systems, such as those used for video-conferencing, but their cost and complexity separates them from the common mass of computing support. If, however, distributed multimedia systems can be realized, many possibilities for enhanced communication and more effective access to information exist. The key to this new generation of information systems is integration, bringing the power of multimedia display to the users in their normal working environment and effectively breaking down many of the barriers implicit in geographical distribution. Now that significant computing power is available on the desktop, integration of voice and video is the next major step forward. These integrated systems represent a very large market for components and for integrating expertise. It will probably be the largest single growth area for new IT applications over the next ten years. A coordinated set of components, conforming to a common architectural model with agreed interface standards, is required to allow the research and development of prototypes for new applications and to progress smoothly to the delivery of complete multimedia distributed systems. T9000 transputers, DS-Links and C104 routers provide a cost-effective platform on which this infrastructure can be built. Appendices [ Appendices.ps.Z - 428921 bytes ] Appendix A: New link cable connector C.P.H. Walker This appendix describes a connector that will assist standardization of transputer link connections. Appendix B: Link waveforms C.P.H. Walker This appendix shows waveforms of signals transmitted through cable and fibre. Appendix C: DS-Link Electrical specification R. 
Francis This appendix gives detailed electrical parameters of DS-Links. Appendix D: An Equivalent circuit for DS-Link Output Pads R. Francis This appendix gives an equivalent circuit for the DS-Link output pads. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: djb1@ukc.ac.uk Newsgroups: comp.parallel,comp.sys.transputer,comp.parallel.pvm Subject: [LONG] Transputer, occam and parallel computing archive: NEW FILES Organization: Computing Lab, University of Kent at Canterbury, UK. Summary: Loads more files. See ADMIN article too for other info. Keywords: transputer, occam, parallel, archive Sender: news@ukc.ac.uk This is the new files list for the Transputer, occam and parallel computing archive. Please consult the accompanying article for administrative information and the various ways to access the files. [For experts: ftp to unix.hensa.ac.uk and look in /parallel] Dave NEW FEATURES ~~~~~~~~~~~~ * "Networks, Routers and Transputers" book - see detailed listing below * INMOS Preliminary Datasheets for the C101 and C104 DS-Link processors * FULL TEXT INDEX A nightly full text index is now being generated, of all the individual Index files. This is probably the best way to find something by 'grepping' the file although it is very large. /parallel/index/FullIndex.ascii 223327 bytes /parallel/index/FullIndex.ascii.Z 74931 bytes (compressed) /parallel/index/FullIndex.ascii.gz 52024 bytes (gzipped) NEW FILES since 28th October 1993 (newest first) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /parallel/books/ios/nrat/ "Networks, Routers & Transputers" book as compressed PostScript edited by M.D. May, P.W. Thompson and P.H. Welch. /parallel/books/ios/nrat/Overview Overview of the book - READ THIS FIRST and pick a chapter /parallel/books/ios/nrat/Introduction.ps.Z Introduction /parallel/books/ios/nrat/Chapter1.ps.Z Transputers and Routers: Components for Concurrent Machines /parallel/books/ios/nrat/Chapter2.ps.Z The T9000 Communications Architecture /parallel/books/ios/nrat/Chapter3.ps.Z DS-Links and C104 Routers /parallel/books/ios/nrat/Chapter4.ps.Z Connecting DS-Links /parallel/books/ios/nrat/Chapter5.ps.Z Using Links for System Control /parallel/books/ios/nrat/Chapter6a.ps.Z /parallel/books/ios/nrat/Chapter6b.ps.Z Models of DS-Link Performance Parts 1 and 2 /parallel/books/ios/nrat/Chapter7.ps.Z Performance of C104 Networks /parallel/books/ios/nrat/Chapter8.ps.Z General Purpose Parallel Computers /parallel/books/ios/nrat/Chapter9.ps.Z The Impl. of Large Par. Database Machines on T9000 and C104 Nets /parallel/books/ios/nrat/Chapter10a.ps.Z /parallel/books/ios/nrat/Chapter10b.ps.Z /parallel/books/ios/nrat/Chapter10c.ps.Z /parallel/books/ios/nrat/Chapter10d.ps.Z A Generic Architecture for ATM Systems Parts 1 to 4 /parallel/books/ios/nrat/Chapter11.ps.Z An Enabling Infr. for a Distributed Multimedia Industry /parallel/books/ios/nrat/Appendices.ps.Z Appendices A to D 12th November 1994 [ Added Working Group documents and papers and reports for the ] [ IEEE Draft Std P1355 - Standard for Heterogeneous InterConnect (HIC) ] [ (INMOS's DS Links) as well as datasheets for IMS C101 and IMS C104 ] [ These are the documents on the INMOS UK ftp site, cleaned up ] /parallel/documents/inmos/ieee-hic/copper.ps.Z Long Distance Differential Transmission of DS Links over Copper Cable by Stefan Haas, Xinjian Liu and Brian Martin of CERN. 34 pages. 981195 bytes uncompressed. /parallel/documents/inmos/ieee-hic/data/C101.ps.Z IMS C101 parallel DS-Link adaptor - Preliminary Datasheet 52 pages. 
1679579 bytes uncompressed. /parallel/documents/inmos/ieee-hic/data/C104.ps.Z IMS C104 packet routing switch - Preliminary Datasheet 58 pages. 5834421 bytes uncompressed. /parallel/documents/inmos/ieee-hic/draftd0.0.ps.Z IEEE Draft Std P1355 - Standard for Heterogeneous InterConnect (HIC) (Low Cost Low Latency Scalable Serial Interconnect for Parallel System Construction) of 7th October 1993. Version D0.0 80+ pages. 2306156 bytes uncompressed. /parallel/documents/inmos/ieee-hic/fiber.ps.Z GP-MIMD T9000 Fiber Optic Link Extensions - Report on 850nm Fiber Optic Transceiver by Stefan Haas of ECP Division, CERN. 22 pages. 417516 bytes uncompressed. /parallel/documents/inmos/ieee-hic/pressrel.txt Press release about IEEE P1355 Working Group. July 1993. /parallel/documents/inmos/ieee-hic/roster.txt Roster of Working Group members /parallel/documents/inmos/ieee-hic/wg19oct.txt Working group minutes of 19th October 1993 /parallel/documents/inmos/ieee-hic/wg1sep.txt Working group minutes of 1st September 1993 /parallel/documents/inmos/ieee-hic/wg22jun.txt Working group minutes of 22nd June 1993 9th November 1993 /parallel/journals/jcse-par-alg-arch Call for papers for the Journal of Computer and Software Engineering Special Issue on Parallel Algorithms and Architectures to be published around January 1995. Deadline: 1st May 1994. /parallel/conferences/ieee-workshop-par-dist-simulations Call for papers for ACM/IEEE/SCS 8th Workshop on Parallel and Distributed Simulation being held from 6th-8th July 1994 at the University of Edinburgh, Scotland, UK. Deadlines: Papers: 1st December 1993; Notification: 1st March 1994; Camera-ready copy: 15th April 1994. /parallel/faqs/linux-and-transputers Summary of current state of Linux and transputers by Michael Haardt <(michael)u31b3hs@pool.informatik.rwth-aachen.de>, author of the assembler and server package. /parallel/courses/applications-of-par-programming Applications for High Performance Computers 14th-17th December 1993 given by Telmat Informatique in Soultz, France. Deadline for registration: 6th December 1993. /parallel/conferences/raps-workshop-par-benchmarks Call for attendance for the RAPS Open Workshop on Parallel Benchmarks and Programming Models being held from 7th-8th December 1994 at Chilworth Manor Conference Centre, Southampton, UK. /parallel/conferences/massive-parallelism Call for papers for 2nd International Workshop on Massive Parallelism: Hardware, Software and Applications being held from 3rd-7th October 1994 and organized by: Istituto di Cibernetica, Naples, Italy. Deadlines: Manuscripts: 1st February 1994; Notification: 31st April 1994; Camera-ready papers: 31st May 1994. /parallel/books/mit/concurrent-oo.announce Announcement of book: "Research Directions in Concurrent OOP" edited by Gul Agha, Peter Wegner and Aki Yonezawa. /parallel/conferences/podc94 Call for papers for 1994 ACM Symposium on Principles of Distributed Computing (PODC) being held from 14th-17th August 1994 at Los Angeles, California, USA. Deadlines: Abstracts: 4th February 1994; Acceptance: 15th April 1994; Camera-ready copy: 10th May 1994. /parallel/software/linux/assembler/server.tz /parallel/software/linux/assembler/asld.tz Updated versions. [ Added release notes for Mentat 2.6. Note: No binary distribution ] [ (executables) is included here due to the licensing restrictions. It ] [ must be obtained from the source site directly and not distributed from ] [ your site. See README for details. 
] /parallel/software/virginia/mentat/README.2.6 New features in version 2.6 /parallel/software/virginia/mentat/Release26_notes.ps.Z Release notes for version 2.6 8th November 1993 /parallel/conferences/ipps94-parallel-io-workshop Call for papers for the 2nd Annual Workshop on I/O In Parallel Computer Systems being held at IPPS (International Parallel Processing Symposium) 94. The workshop will be held on 26th April 1994 at Hotel Regina, Cancun, Mexico. Deadlines: Paper: 31st January 1994; Notification: 15th March 1994; Camera-ready copy: 7th April 1994. /parallel/documents/vendors/cray/cray-cs64000.announcement Announcement of the CRAY CS64000 by Cray Research, Inc. /parallel/software/announcements/pablo Details of PABLO: a system for the collection, display, and analysis of parallel program performance data developed by Prof. Daniel A. Reed's research group at the University of Illinois at Urbana-Champaign, USA. /parallel/faqs/classification-of-parallel-algorithms Summary of responses to a query about classifying parallel algorithms by Marion Wittmann /parallel/conferences/hicss28-par-and-dist-computing Call for Minitrack Proposals for the Software Technology Track focussing on Parallel and Distributed Computing: Theory, Systems and Applications at HICSS-28 (28th Hawaii International Conference on System Sciences) being held at Maui, Hawaii, USA from 3rd-6th January 1995. [A minitrack is either a half day or a full day of technical sessions. All sessions are conducted in a workshop-like setting and participants often participate in several different tracks] Deadlines: Proposals: 26th November 1993; Notification: 31st December 1993. /parallel/conferences/int-conf-par-processing-1994 Call for papers for the 1994 International Conference on Parallel Processing (23rd Annual Conference) being held from 15th-19th August 1994 at The Pennsylvania State University, Illinois, USA. Deadlines: PAPERS: 10th January 1994; Acceptance: 20th March 1994; TUTORIALS: 1st March 1994. [ Updated WoTUG, NATUG and WoTUG/Japan user group details: ] /parallel/user-groups/transputer-user-groups Contact addresses for known world transputer user groups /parallel/user-groups/natug/committee.doc North American Transputer User Group (NATUG) committee /parallel/user-groups/wotug/committee.doc WoTUG committee members details /parallel/user-groups/wotug/constitution.doc The new constitution of WoTUG, approved at the Sheffield meeting and first AGM. /parallel/user-groups/wotug/sig-chairs.doc WoTUG Special Interest Group (SIG) chairs /parallel/user-groups/wotug/minutes/1993-03-31 Minutes of the first World occam and Transputer User Group (WoTUG) Annual General Meeting (AGM) held during the 16th WoTUG Technical Meeting, on 31 March 1993 at Earnshaw Hall, University of Sheffield, UK. Minutes Secretary: Julie Clarke (University of Sheffield) /parallel/user-groups/wotug-japan/committee.doc World occam and Transputer User Group Japan committee /parallel/papers/twente Papers from researchers at the Mechatronics Research Centre Twente and Control Laboratory, Department of Electrical Engineering, University of Twente, Netherlands. /parallel/papers/twente/fft.ps.Z "A Generalized FFT algorithm on transputers" by Herman Roebbers, University of Twente, The Netherlands; Peter Welch, University of Kent at Canterbury, UK; Klaas Wijbrans, Van Rietschoten & Houwens, The Netherlands. ABSTRACT: "A Generalized algorithm has been derived for the execution of the Cooley-Tukey FFT algorithm on a distributed memory machine.
This algorithm is based on an approach that combines a large number of butterfly operations into one large process per processor. The performance can be predicted from theory. The actual algorithm has been implemented on a transputer array, and the performance of the implementation has been measured for various sizes of the complex input vector. It is shown that the algorithm scales linearly with the number of transputers and the problem size." /parallel/papers/twente/linxback.ps.Z "The Twente LINX backplane" by M.H. Schwirtz, K.C.J. Wijbrans, A.W.P. Bakkers, E.P. Hoogzaad and R. Bruis of the Mechatronics Research Centre Twente and Control Laboratory, Department of Electrical Engineering, University of Twente, Netherlands. ABSTRACT: "The design of a control system is not finished with the derivation of the necessary control algorithms. When the controller is implemented in a digital computer, the system designer has to schedule all control and calculation tasks within the sampling interval of the system. Higher sampling frequencies often improve system performance. On the other hand, more sophisticated control algorithms require more computing time, thus reducing the obtainable sampling frequencies. Therefore, it is important to minimise the overhead of sampling and communications. This paper describes a transputer-based I/O system fulfilling this requirement and shows how the sampling with this system is done." /parallel/papers/twente/pga-kernel.ps.Z "Post-Game Analysis on Transputers - Development of a Measurement Kernel" by J.P.E. Sunter, E.C. Koenders and A.W.P. Bakkers, Mechatronics Research Centre Twente and Control Laboratory, Department of Electrical Engineering, University of Twente, Enschede, Netherlands. ABSTRACT: "In this paper Post-game analysis, a method for allocation of processes on an arbitrary network of processors, is investigated. Contrary to other methods, this method is not based on a-priori information. To generate allocations it uses heuristics and measurements obtained during program execution using a previous allocation. This has the advantage that it is not influenced by inaccuracies in a-priori information. This method needs a few iterations to come to a good allocation. If the number of iterations is small, this method is a good replacement for the computationally intensive deterministic methods". /parallel/papers/twente/pga-performance.ps.Z "Performance of Post-Game Analysis on Transputers" by J.P.E. Sunter and A.W.P. Bakkers, Mechatronics Research Centre Twente and Control Laboratory, Department of Electrical Engineering, University of Twente, Enschede, Netherlands. ABSTRACT: "In this paper the performance of a Post-Game Analysis system is studied. For this purpose several simple cases have been designed. These consist of a number of processes which have to be distributed over a network of transputers. The final distributions for these cases are compared to the optimal ones, which can be easily derived for these simple cases. Performance measures considered include the number of iterations required to reach the final distribution, and the overhead caused by monitoring the program behaviour." /parallel/papers/twente/prio-sched.ps.Z "Cooperative Priority Scheduling in Occam" by J.P.E. Sunter, K.C.J. Wijbrans and A.W.P. Bakkers, Mechatronics Research Centre Twente, Electrical Engineering Department, University of Twente, Enschede, Netherlands. ABSTRACT: "In this paper a scheduler for variable priority scheduling is presented.
This scheduler assumes that the processes being scheduled cooperate with the scheduler. This cooperation introduces some latency in the scheduling of the processes. Analytic expressions describing the effect of this latency are derived. A variable priority scheduler was implemented and results from actual program executions are given. These results show that the scheduler can be used to schedule algorithms with simple sequential processes with sample frequencies not higher than 2 kHz." /parallel/papers/twente/virtual.ps.Z "Virtual Channel Generator - VCG" by Johan P.E. Sunter and A.W.P. Bakkers, Mechatronics Research Centre Twente, University of Twente, Enschede, Netherlands. ABSTRACT: "This paper deals with a novel way of implementing communication layers for transputer networks. In the past, the limitation of the number of links of a transputer to four has led to the development of many network layers and operating system kernels that provide topology independent routing. Programming multitransputer systems is facilitated this way, by providing a transparent communications service through kernel calls or special communication channels. Because these layers are designed as operating systems or library functions, the complete layer is always added to the transputer system. Thus the layers introduce considerable overhead for applications using irregularly sized data transfer, a high communication/computation ratio or large packet sizes. Especially for real-time applications this is not acceptable. This paper describes a different approach that combines design-time flexibility with run-time efficiency. Instead of always adding the same general purpose kernel for the provision of a transparent communications service, a dedicated kernel is generated for each transputer. This is possible because a-priori knowledge can be extracted at compile time from the application processes that are loaded onto each transputer. This knowledge consists of the communication size on the channels and of the topology of the application process. The network generator, the Virtual Channel Generator (VCG), is sufficiently smart to recognize situations in which it is not necessary to add network processes. As a result, the network layer is optimally adapted to the requirements of the specific application." /parallel/papers/twente/while-sched.ps.Z "List Scheduling While Loops on Transputers" by Johan P.E. Sunter and A.W.P. Bakkers, Mechatronics Research Centre Twente, University of Twente, Enschede, Netherlands. ABSTRACT: "List scheduling is a well-known tool for scheduling sequential programs on parallel machines. However, these sequential programs are not allowed to contain loops. In this paper this restriction is removed. A list scheduler is presented which allows the sequential programs to contain nested while loops." /parallel/conferences/wotug17 Call for papers for the 17th World occam and Transputer User Group (WoTUG) Technical Meeting being held from 11th-13th April 1994 at the University of Bristol, UK. Deadlines: Extended abstracts: 11th November 1993; Notification: Mid December 1993; Camera-ready copy: 10th January 1994. 2nd November 1993 OTHER HIGHLIGHTS ~~~~~~~~~~~~~~~~ * occam 3 REFERENCE MANUAL (draft) /parallel/documents/occam/manual3.ps.Z By Geoff Barrett of INMOS - freely distributable but copyrighted by INMOS; it is a full 203-page book in the same style as the Prentice Hall occam 2 reference manual. Thanks a lot to Geoff and INMOS for releasing this.
* TRANSPUTER COMMUNICATIONS (WoTUG JOURNAL) FILES /parallel/journals/Wiley/trcom/example1.tex /parallel/journals/Wiley/trcom/example2.tex /parallel/journals/Wiley/trcom/trcom.bst /parallel/journals/Wiley/trcom/trcom01.sty /parallel/journals/Wiley/trcom/trcom02.sty /parallel/journals/Wiley/trcom/trcom02a.sty /parallel/journals/Wiley/trcom/transputer-communications.cfp /parallel/journals/Wiley/trcom/Index /parallel/journals/Wiley/trcom/epsfig.sty LaTeX (.sty) and BibTeX (.bst) style files and examples of use for the forthcoming Wiley journal - Transputer Communications, organised by the World occam and Transputer User Group (WoTUG). See transputer-communications.cfp for details on how to submit a paper. * FOLDING EDITORS: origami, folding micro emacs /parallel/software/folding-editors/fue-original.tar.Z /parallel/software/folding-editors/fue-ukc.tar.Z /parallel/software/folding-editors/origami.zip /parallel/software/folding-editors/origami.tar.Z Two folding editors - origami and folding micro-emacs - traditionally used for occam programming environments due to the indenting rules. Origami is an updated version of the folding editor distribution as improved by Johan Sunter of Twente, Netherlands. fue* are the original and UKC-improved versions of folding micro-emacs. * T9000 SYSTEMS WORKSHOP REPORTS /parallel/reports/wotug/T9000-systems-workshop/* The reports from the T9000 Systems Workshop held at the University of Kent at Canterbury in October 1992. It contains ASCII versions of the slides given then, with the permission of the speakers from INMOS. Thanks to Peter Thompson and Roger Shepherd for this. Subjects explained include the communications architecture and low-level communications, the processor pipeline and grouper, the memory system and how errors are handled. * THE PETER WELCH PAPERS /parallel/papers/ukc/peter-welch Eleven papers by Professor Peter Welch and others of the Parallel Processing Group at the Computing Laboratory, University of Kent at Canterbury, England, related to occam, the Transputer and other things. Peter is Chairman of the World occam and Transputer User Group (WoTUG). * ISERVERS /parallel/software/inmos/iservers Many versions of the iserver: the normal version, one for Windows (WIserver), one for etherneted PCs (PCServer) and one for Meiko hardware. * MIRROR OF PARLIB /parallel/parlib Mirror of the PARLIB archive maintained by Steve Stevenson, the moderator of the USENET group comp.parallel. * UKC REPORTS /pub/misc/ukc.reports The internal reports of the University of Kent at Canterbury Computing Laboratory. Many of these contain parallel computing research. * NETLIB FILES /netlib/p4 /netlib/pvm /netlib/pvm3 /netlib/picl /netlib/paragraph /netlib/maspar As part of the general unix.hensa.ac.uk archive, there is a full mirror of the netlib files for the above packages (and the others too). Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dmj@comlab.ox.ac.uk (David Jackson) Subject: Refinement Checking Tool for CSP Available Organization: Oxford University Computing Laboratory, UK Formal Systems (Europe) is pleased to announce the availability of academic licences for its FDR refinement checking tool. The academic release is made possible by support from the US ONR. The distribution is available to educational institutions for a nominal media charge -- Formal Systems makes no profit from this release. The latest version (FDR 1.3) is now available.
New features include X-windows interactive interface support Support for off-line and remote refinement checks Improved type-checker and parser More extensive debugging information Existing FDR users can obtain updates via FTP from the address given below, which can be used in conjunction with their existing LICENCE files. A N N O U N C I N G F D R ============================= FDR (standing for Failures Divergence Refinement) is a tool for proving properties of CSP programs. It is a product of Formal Systems (Europe) Ltd. It deals with a flexible syntax based on the CSP notation presented in Hoare's text. Support is also provided for handling the numbers, sets, sequences, etc, frequently used in processes' states and data-components of events. Developed for industrial applications, for example at Inmos where it is used to develop and verify communications hardware, it is now made available at nominal cost to academic institutions thanks to support from the US Office of Naval Research (ONR). FDR brings CSP to life for teaching, and provides an excellent tool for developing, debugging and finally proving programs written in it. It is supplied with a library of commented examples, including the dining philosophers, a train set, a voting system, and several communication protocols including the alternating bit and sliding window protocols. These examples are currently available by anonymous FTP, to enable potential users to see a sample of what FDR can do. A copy of the FDR manual, which contains a description of the syntax supported, can be obtained in the same way. Obtaining Information About FDR =============================== Information about the FDR system, including up-to-date versions of the manual, may be obtained via anonymous ftp from "ftp.comlab.ox.ac.uk", IP number "192.76.25.2". The files are stored in "/pub/Packages/FDR/public.info/" (See the end of this announcement for further details). Comments, suggestions, and questions about FDR should be addressed to "fdr-request@comlab.ox.ac.uk" Technical Details ================= The theory of refinement in CSP allows most correctness conditions (safety and liveness, but not fairness in the current version) to be encoded as the most nondeterministic process satisfying them. We can test whether an implementation meets the specification by deciding if it refines the specification process. FDR is therefore built around this decision question. It implements a normalisation procedure for the specification process, which represents the specification in a form where the implementation can be checked against it by model-checking techniques (exploring reachable states). Both the specification and the implementation must therefore be finite-state processes. The theoretical result that the normal form of a CSP process can be exponential in the number of states in the input has never proved problematic except in deliberately created pathological examples. In the majority of cases, checking is linear in the number of states in the operational semantics of the implementation. The system is currently built as a Standard ML program, which calls routines in C to perform the normalisation and model-checking functions. * * * * * * * * * * * * * The size of problem it can deal with depends on the amount of memory (both physical and swap) available on the machine you are using. It is able to make effective use of virtual memory in the model checking phase, making the number of states dealt with proportional to this. 
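The state-exploration idea can be sketched in a few lines of C. The toy program below covers only the traces-refinement core (FDR itself also handles failures and divergences) and uses made-up processes and event names: it breadth-first searches the product of a normalised, deterministic specification automaton and an implementation automaton, and reports any reachable implementation event that the specification does not allow.

/*
 * Toy traces-refinement check by explicit state exploration.  The spec
 * is SPEC = coin -> choc -> SPEC; the implementation is a faulty machine
 * that can accept a second coin.  Everything here is invented for the
 * example and is not FDR's input language or algorithmic detail.
 */
#include <stdio.h>

#define EVENTS  2            /* event 0 = "coin", event 1 = "choc"       */
#define SPEC_N  2
#define IMPL_N  3
#define NONE   -1

static const int spec[SPEC_N][EVENTS] = {   /* normalised, deterministic */
    { 1, NONE },             /* state 0: coin allowed, choc not          */
    { NONE, 0 },             /* state 1: choc allowed, coin not          */
};

static const int impl[IMPL_N][EVENTS] = {
    { 1, NONE },
    { 2, 0 },                /* the extra coin transition: a bug         */
    { NONE, 0 },
};

int main(void)
{
    int visited[SPEC_N][IMPL_N] = {{0}};
    int queue[SPEC_N * IMPL_N][2];
    int head = 0, tail = 0;

    queue[tail][0] = 0; queue[tail][1] = 0; tail++;   /* start in (0,0)  */
    visited[0][0] = 1;

    while (head < tail) {
        int s = queue[head][0], i = queue[head][1];
        head++;
        for (int e = 0; e < EVENTS; e++) {
            int inext = impl[i][e];
            if (inext == NONE)
                continue;                    /* impl cannot do this event */
            int snext = spec[s][e];
            if (snext == NONE) {             /* spec forbids it: fail     */
                printf("refinement FAILS: impl state %d offers event %d"
                       " which spec state %d forbids\n", i, e, s);
                return 1;
            }
            if (!visited[snext][inext]) {
                visited[snext][inext] = 1;
                queue[tail][0] = snext; queue[tail][1] = inext; tail++;
            }
        }
    }
    printf("traces refinement holds (%d product states explored)\n", tail);
    return 0;
}

On this toy input the check fails, because the implementation can accept a second coin that the specification forbids. The number of product states explored is roughly what the "States" column in the timings below counts for the real tool, which is why available memory, rather than the normal-form blow-up, is usually the limiting factor.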
A few timings are given below based on a Sparc2 with 32Mb of physical memory and approximately 75Mb of swap space. They are all based on the example files, as indicated.

File    Spec   Imp       States   Setup   Check1(e/cpu)   Check2Quick
phils   DF     BSYSTEM     5151      35   30/20            9/7
trains  RLIVE  RSYSTEM     4260      25   20/14            7/4
swp1    SPEC   SYSTEM     20480     230   121/113          12/11
swp2    OSPEC  DC        117888     300   21m35s/21m8s     210/204
swp2    SPEC   SYSTEM    919680     300   ****             29m/28m11s

"File" is the example file from which the problem is taken. "Spec" and "Imp" are the processes from those files checked. "States" is the number of states visited in the model-checking phase. "Setup" is the approximate elapsed time taken by the SML front end to convert the already-parsed CSP programs into the compiled form used by the refinement checking programs. This time depends on the sum of the number of states in the low-level component processes that make up the specification and implementation, and also on the complexity of their definitions. If multiple use is made of definitions during a session, the caching of FDR allows much of this time to be saved for checks after the first. "Check1" gives the elapsed time for the rest of the refinement process, and the cpu time for the model-checking phase, using one version of the refinement checker -- one which performs an explicit divergence check. "Check2Quick" gives the corresponding times for a version of the checker that does not test for divergence in the implementation, and therefore assumes absence of divergence. All times given are in seconds except where indicated. It can be seen that, for small problems, the setup phase tends to dominate the processing time, whereas for problems with large numbers of states it is the model-checking phase that dominates. * * * * * * * * * * * * * A subsequent version of the system will support various techniques for avoiding the explicit exploration of state spaces (which typically grow exponentially with the number of processes in a parallel system). This will allow, for example, the deadlock analysis of N dining philosophers in time linear in N, rather than the current exponential time. * * * * * * * * * * * * * Distribution and Support ======================== The software currently runs only on SUN Sparc systems (and clones), though alternative platforms are being investigated. 16Mb of physical memory is required by the system, and its performance will improve with more. Systems running FDR should have at least 60 Mb of virtual memory; those wishing to use it on large problems will need more. Formal Systems is planning to support other hardware (including PC-type systems) and various operating systems. Possible platforms include: Intel 3/486 based systems running OS/2, Linux, or BSD386; Apple Macintosh A/UX or Mach Ten systems; IBM RS6000 AIX systems. If you are interested in these or any other developments, please contact Formal Systems at the address below. The system is available to bona fide educational institutions for education and research use (i.e., non-profit) only for the nominal price detailed below. Formal Systems reserves the right to refuse this offer to any party at its absolute discretion. If you are unsure whether your institution will qualify for this offer, please write or FAX as indicated below with details. The system may be used by an unlimited number of users from a single installation at one site. A licence file is supplied which may not be copied except for backup, and which must be present for the software to run.
This enables us to support the system by FTP: users will be supplied with details of how to receive updated versions of FDR. For the time being this service is supported by the ONR and free to users under this academic release. The software is supplied on 2 SUN `bar' format 3.5in disks, with a further disk containing the licence. An alternative to using the `bar' disks is to load the system less licence via FTP, and then install using only the licence disk. The software is available on DC6150 Tapes at additional cost. One copy of the manual, which may be copied, is supplied with the distribution, as are installation instructions. * * * * * * * * To obtain your copy of the system, please print out the following form, fill it in and post with cheque to Formal Systems (ref FDR) 3 Alfred Street Oxford OX1 4EH UK Enquiries from individuals or bodies who do not qualify under the terms of this offer are welcome. Please write to the above address, send a FAX to [+44] (0)865 201114, or telephone [+44] (0)865 728460 and ask for Dave Jackson or Michael Goldsmith. --------8<------------8<------------8<------------8<------------8<-------- FDR Academic Licence: Application form ====================================== Name of Institution: Site where software to be used: (above information will appear in licence) Address for delivery: Contact name/email address: Distribution required: Standard (bar disks): 200 pounds [ ] or 400 US dollars [ ] DC6150 Tape : 250 pounds [ ] or 500 US dollars [ ] VAT: Customers in the UK should add VAT at the prevailing rate. A VAT invoice will be sent with the software. Customers who are (i) within the European Community but outside the UK and (ii) are ordering subsequent to 1 December 1992 should either add VAT at the UK rate (17.5%) or provide their VAT number. The former will apply to customers who are not VAT registered, and customers who are VAT registered should normally do the second. This is necessary because of the VAT regulations connected with the Single European Market. Cheque or International Money Order should be sent with order. The price includes postage, but any customs or other dues payable will be the responsibility of the purchaser. --------8<------------8<------------8<------------8<------------8<-------- The above prices apply to academic institutions only, and use of the software is limited to education and non-commercial research. The licence issued will be non-transferrable. Obtaining Information via Anonymous FTP ======================================= Having called "ftp", you should proceed as follows: ftp> open ftp.comlab.ox.ac.uk 220-ftp.comlab.ox.ac.uk FTP server (Version 4.696 Tue Jun 9) ready. 220 Anonymous retrieval login with userid: `anonymous'; Name (ftp.comlab.ox.ac.uk:user): enter "anonymous" 331 Guest login ok, give your email address for password. Password: enter your email address: e.g. "user@comlab.ox.ac.uk". 230-Guest `user@comlab.ox.ac.uk' login ok, access restrictions apply. 230-Local time is now Fri Oct 23 18:00:36 1992 230-We have special access features, see file /README 230 It was last updated Tue Sep 15 09:24:58 1992 - 38.4 days ago ftp> enter "bin" (in case you're picking up any binary files: e.g., *.dvi) 200 Type set to I. ftp> enter "cd /pub/Packages/FDR/public.info/manual" 250 CWD command successful. ftp> enter "mget *" and type "y" to receive each file mget manual.ps? y 200 PORT command successful. 150 Opening BINARY mode data connection for /pub/Packages/FDR/public.info/manual /manual.ps (492098 bytes). 
226 Transfer complete. local: manual.ps remote: manual.ps 492098 bytes received in 3.6 seconds (1.3e+02 Kbytes/s) ftp> to get example files, you should then enter "cd /pub/Packages/FDR/public.info/examples" ftp> 250 CWD command successful. ftp> enter "mget *" mget 00.examples? y 200 PORT command successful. 150 Opening BINARY mode data connection for /pub/Packages/FDR/public.info/examples/00.examples (3229 bytes). 226 Transfer complete. local: 00.examples remote: 00.examples 3229 bytes received in 0.37 seconds (8.6 Kbytes/s) mget abp.csp? y 200 PORT command successful. ... etc. ftp> quit Problems with FTP? ================== Note that if the server fails to recognise your address, the following message will be displayed: 230-Guest `no_such_user@mars' login ok, access restrictions apply. 230-Hmm.. You didn't give your complete email address for password, - Please, next time give your Internet email address. - Example: Example.Name@pierrot.comlab.ox.ac.uk 230-Local time is now Fri Oct 23 18:19:54 1992 -Your IP number didn't reverse, and/or password wasn't ok as email address. -Read /CAPTIVE-README for further information! 230-We have special access features, see file /README 230 It was last updated Tue Sep 15 09:24:58 1992 - 38.4 days ago and you will unable to retrieve files. Should this occur, close the connection by typing "close" and try again. ============================================================================ E-mail facilities of the University of Oxford used by their courtesy. Trademarks are acknowledged as the property of their respective owners. -- David M. Jackson - Research Student - Oxford Univ. Programming Research Group David.Jackson@prg.ox.ac.uk 11 Keble Rd, Oxford, UK. Tel.+44-865-273846 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jabf@festival.ed.ac.uk (J Blair-Fish) Subject: APPLICATIONS OF PARALLEL PROCESSING IN GEOSCIENCE Message-ID: Organization: Edinburgh University Date: Fri, 12 Nov 1993 14:56:17 GMT EUROPEAN GEOPHYSICAL SOCIETY XIX GENERAL ASSEMBLY GRENOBLE, 25-29 APRIL 1994 **************** CALL FOR PAPERS ******************* Society Symposium: Session EGS2 Title: APPLICATIONS OF PARALLEL PROCESSING IN GEOSCIENCE This session encourages presentations of applications of any aspect of parallel processing covering the disciplines in EGS. Hardware configurations may range from for example a few interconnected workstations to massively parallel dedicated machines. The aim is cross fertilisation of ideas and techniques involving the use, benefits and pitfalls of parallelisation for problem-solving and may include consideration of the relative merits of porting existing code or writing anew. For further details (e.g. Young Scientist and East European Awards, registration details etc) contact the Convener:- Dr. 
B.A.Hobbs Department of Geology and Geophysics University of Edinburgh West Mains Road Edinburgh EH9 3JW UK Tel: (44)-31-650-4906 Fax: (44)-31-668-3184 email: bah@uk.ac.ed.castle ****** DEADLINE FOR RECEIPT OF ABSTRACTS 1 JANUARY 1994 ******** DEADLINE FOR YOUNG SCIENTIST AND EAST EUROPEAN AWARDS - 15 DECEMBER 1993 ************************************************************************ -- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: Re: Need introductory parallel programming book Organization: Professional Student, University of Maryland, College Park References: <1993Nov11.213044.26194@hubcap.clemson.edu> Nntp-Posting-Host: coke.eng.umd.edu Originator: dbader@coke.eng.umd.edu In article <1993Nov11.213044.26194@hubcap.clemson.edu>, cychong@magnus.acs.ohio-state.edu (Robert Chong) writes: > > Can anyone suggest an introductory and language independent >book which can teach me to write parallel programs/algorithms? I highly recommend these texts as an introduction to parallel algorithms: @book{JaJa, author = {Joseph~J\'{a}J\'{a}}, address = {New York}, publisher = {Addison-Wesley Publishing Company}, title = {{An Introduction to Parallel Algorithms}}, year = {1992} } @book{Akl, author = {Selim G. Akl}, address = {Englewood Cliffs, NJ}, publisher = {Prentice-Hall}, title = {The Design and Analysis of Parallel Algorithms}, year = {1989} } @book{Quinn, author = {Michael J. Quinn}, address = {New York}, publisher = {McGraw-Hill}, title = {Designing Efficient Algorithms for Parallel Computers}, year = {1987} } -david David A. Bader Electrical Engineering Department A.V. Williams Building University of Maryland College Park, MD 20742 301-405-6755 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: yali@ecn.purdue.edu (Yan Alexander Li) Subject: nCUBE tflop machine Sender: news@ecn.purdue.edu (USENET news) Organization: Purdue University Engineering Computer Network I was told that nCUBE announced a tera FLOP machine due in a couple years. Does anyone have more information on this? Thanks, Alex Li Graduate Assistant 1285 Electrical Engineering Purdue University West Lafayette, IN 47907 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rlarowe@chpc.org (Rick LaRowe) Subject: Technical Report on Galactica Net Organization: Center For High Perf. Computing of WPI; Marlboro Ma Date: Fri, 12 Nov 1993 20:10:52 GMT Apparently-To: comp-parallel@uunet.uu.net Announcing the availability of the following technical report: Update Propagation in the Galactica Net Distributed Shared Memory Architecture A. Wilson, R. LaRowe, R. Ionta, R. Valentino, B. Hu, P. Breton, and P. Lau CHPC TR 93-007 Abstract Galactica Net employs a unique approach to supporting shared memory coherence in a distributed computing system. The approach is based upon the use of operating system software for the complex management of sharing information, but provides hardware support for the mnost important coherence operations. The goal is to delegate as much of the complexity to software as possible, while still exploiting hardware to ensure high performance. In this paper, we describe the Galactica Net update-based coherence protocol and the hardware required to implement that protocol. We also present basic performance data obtained through detailed hardware simulations. 
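For readers unfamiliar with update-based coherence, the sketch below (in Python, with all class and method names invented for illustration) shows the general idea: every write is pushed to all nodes holding a copy of the location rather than invalidating the remote copies. It is a generic toy model, not Galactica Net's actual protocol, directory organisation, or hardware interface.

# Toy model of update-based coherence: every write is pushed to all nodes that
# hold a copy of the location, instead of invalidating the remote copies.
# All names here are invented for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.copy = {}                    # this node's local copies

class UpdateDirectory:
    # The sharing list that, in a real system, system software could manage;
    # the per-write update sends are the part hardware support would accelerate.
    def __init__(self, memory):
        self.memory = memory              # backing shared memory
        self.sharers = {}                 # location -> set of nodes with a copy

    def read(self, node, loc):
        if loc not in node.copy:          # first touch: fetch and join sharers
            node.copy[loc] = self.memory[loc]
            self.sharers.setdefault(loc, set()).add(node)
        return node.copy[loc]

    def write(self, node, loc, value):
        self.memory[loc] = value
        self.sharers.setdefault(loc, set()).add(node)
        for sharer in self.sharers[loc]:  # update propagation to every sharer
            sharer.copy[loc] = value

directory = UpdateDirectory({"x": 0})
a, b = Node("A"), Node("B")
directory.read(a, "x")
directory.read(b, "x")
directory.write(a, "x", 42)               # B's copy is updated in place
print(directory.read(b, "x"))             # 42, with no coherence miss

An invalidate-based protocol would replace the propagation loop with deletion of the remote copies, trading update traffic for misses on the next read.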
The report is available via anonymous ftp from chpc.chpc.org, in the directory pub/gnet. The report is available as a compressed postscript file called gnet_updates.ps.Z. Rick LaRowe -- Center for High Performance Computing internet: rlarowe@chpc.org Worcester Polytechnic Institute rlarowe@wpi.edu Suite 170 phone: (508) 624-7400 x610 Marlborough, MA 01752 (508) 624-6354 (fax) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: bottac@rpi.edu (Carlo L Bottasso) Subject: GMRES Date: 12 Nov 1993 20:38:45 GMT Organization: Rensselaer Polytechnic Institute, Troy, NY. Reply-To: bottac@rpi.edu Nntp-Posting-Host: rebecca.its.rpi.edu Keywords: GMRES, parallel, message passing I am looking for a parallel GMRES subroutine based on message passing. Can you tell me if such a subroutine is available ? It should be used on an IBM SP1 for a CFD finite element program. If you have any information, please let me know. Carlo L. Bottasso Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: krishnan@cs.concordia.ca (RADHAKRISHNAN t.) Subject: Call For Papers Message-ID: Sender: usenet@newsflash.concordia.ca (USENET News System) Nntp-Posting-Host: lily.cs.concordia.ca Organization: Computer Science, Concordia University, Montreal, Quebec Date: Fri, 12 Nov 1993 21:41:54 GMT ANNOUNCEMENT AND CALL FOR PAPERS INTERNATIONAL CONFERENCE ON COMPUTER SYSTEMS AND EDUCATION 22 June 1994 - 25 June 1994 Indian Institute of Science Bangalore 560 012 Prof.V.Rajaraman is widely recognized as a pioneer of Computer Education in India. An eminent educationist and researcher, he is an author of over thirteen popular computer science textbooks. Prof. Rajaraman has been responsible for initiating and nurturing several academic programmes in computer science, setting up advanced computing facilities in several institutions, and has made significant contributions towards the formulation of national policies on computers, electronics and education. This International Conference is being organized in honor of Prof.V.Rajaraman on the eve of his retirement from the Indian Institute of Science, Bangalore, with the cooperation and participation of several noted researchers from many countries. It will be held at the Indian Institute of Science, Bangalore, India. The two day conference consists of key note talks and contributed research papers followed by a one-day Workshop wherein researchers attending the conference shall present the state of art research in their areas of work related to the theme of the conference. Four half-day pre-conference tutorials will be held on 22nd June. This conference is being sponsored by the Bangalore section of IEEE and CSI as well as other Institutions. Conference Theme The conference theme broadly covers high performance computer architectures and their software environments, Information systems and Computer Science Education. Topics for submission of original papers include, but are not limited to: - High Performance Parallel and Distributed Computing. - Knowledge Based Systems. - Information Systems. - Computer Aided Design for VLSI. - Supercomputing and Scientific Visualization. - Computer Science Education. Submission Details Interested researchers and scientists are invited to submit papers of their original research, not exceeding twelve (12) A4 size pages in 11pt double spaced two-column format using either LaTeX or Wordstar (Version 4). 
LaTeX header files are available on request. The submission should have clear details of research contributions to the field. Please send four (4) copies to one of the contact addresses given below. The submissions will be reviewed and the selected papers will be included in the conference proceedings. The selected (and revised) papers have to be submitted both in print as well as in source form (in a DOS or Unix formatted floppy or by email). The proceedings will be brought out through a reputed publisher and will be distributed during the conference. Submission Deadlines Full Papers Due: February 1, 1994 Acceptance Notification: March 1 1994 Camera Ready Papers Due: May 1, 1994 Pre-conference Tutorials Four half-day pre-conference tutorials on the theme areas of the conference will be held on the 22nd June,'94 in two parallel sessions. Details will be announced later. Researchers desirous of offering tutorials in the theme areas of the conference are invited to contact Prof.N.Balakrishnan before November 30, '93 with their detailed proposals. Registration All Indian participants will pay a registration fee of Rs 1000 for the conference and Rs 500 for the subsequent workshop. All foreign participants will pay a registration fee of US $50 for the conference and US $25 for the workshop. All students will get a 50% discount in the registration fee. Registration fee for each tutorial is Rs 300 (or US $20 for the foreign participants). Registration closes on April 30, 1994. The registration fee can be paid through a cheque (or demand draft) drawn in favour of "Chairman, Organizing Committee, ICCSE, Bangalore 560 012, India" and mailed to one of the contact addresses. The registration form is enclosed. Contact Addresses Send the full papers as well as for further details contact either one of: T.Radhakrishnan Computer Science Dept. Concordia University 1455 De Maisonneuve Montreal, CANADA H3G 1M8 Email: krishnan@cs.concordia.ca Fax: 514 848 2830 Tel: 514 848 3019 OR N.Balakrishnan Supercomputer Education & Research Centre Indian Institute of Science Bangalore - 560 012 INDIA Email: shikshak@vidya.iisc.ernet.in Fax: 00 91 80 346 648 Tel: 00 91 80 346 325 Program Committee T.Radhakrishnan, Concordia University, Canada (Chairman) Aggarwala,T., IBM, Austin, Texas Arvind, MIT, USA Ananda, A. L., National University, Singapore Bhat,P.C.P., McGill University Biswas, S., IIT Kanpur Bode, A.,Tech. University Munich Burhart,H., University of Basel, Switzerland Chandrashekar, M., University of Waterlloo, Canada Chandra,A., IBM, New York Ganapathy,S., AT&T, New Jersey Gairola, B.K., DOE, New Delhi Govindarajalu,R., REC Warangal, AP Goyal,P., Automation Corporation, Mass, USA Gupta,R.K., Cray Research, Wisconsin, USA Ibramsha,M., Crescent Engg College, Madras Jayaraj,C., S.R.Steel, Bombay Kapoor,D., SUNY, Albany, USA Krishnamoorthy,M.S., RPI, New York Lindstrom,G., University Utah Marwadel,P., University of Dartmund, Germany Misra,J., University of Texas, Austin Moona, R., IIT Kanpur Nori, K.V., TRDDC, Poona Om Vikas, Indian Embassy, Tokyo, Japan Perrot,R., University of Belfast, U.K. Rao,P. R. K., IIT Kanpur Sahni,S., University of Central Florida, USA Sahasrabuddhe,H. V., University of Poona Sankar,S., Stanford University, USA Sivaramamoorthy,C., IIT Madras Sivakumar,M., PES Engg College, Mandya, Karnataka Srinivasan,B., University of Monash, Australia Srivastava,J., University of Minnesota, USA Vaishnavi,V.K., Georgia Tech, USA Organizing Committee N. 
Balakrishnan, SERC, IISc (Chairman) Muthukrishnan, C.R., IIT Madras (Co-Chairman) Khincha,H.P., IEEE, (Bangalore) Rao,N.J., CEDT, IISc Reddy,V.U., ECE, IISc Sonde,B.S., ECE, IISc Srinivasan,R., CSI (Bangalore) Venkatesh,Y.V., EE, IISc Viswanatham,N., CSA, IISc Treasurers D.Sampath, SERC, IISc. T.B.Rajasekhar, NCSI, IISc. Tutorials H.Krishnamurthy, SERC, IISc. Publicity R.Krishnamurthy, SERC, IISc. Registration S.Sundaram, SERC, IISc. Panels T.S.Mohan, SERC, IISc. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stsai@scws1.harvard.edu (Shun-Chang Tsai) Subject: Re: The Future of Parallel Computing Organization: Harvard University, Cambridge, Massachusetts References: <1993Nov11.142504.9124@hubcap.clemson.edu> Nntp-Posting-Host: scws1.harvard.edu edsr!jlb@uunet.UU.NET (Jeff Buchmiller) writes: >But PLEASE don't forget about portability. If the same code can be compiled >onto multiple architectures, it will make the programmer's job MUCH MUCH >easier. (Tune to new architecture, instead of rewrite from scratch.) Well, first, before we worry about portability, we should worry about setting a standard for some parallel language. I know of at least 10+ languagese used on different machines, most of which are supersets of existing standardized languages (The addendum essentially deals with the "parallel" part). Before we can even do this, we need to come up with a very abstract and general way to represent a parallel machine of any architecture. (I don't suppose there has been much research in paralle automatons and parallel Turing machines? ;) (just blabbing about my problem set)). We have to be able to describe the machines so well that to a programmer, the machine is simply a black box. (Well, non-system programmers anyway). OK. So that means we need a standard OS and a standard compiler.... True, right now most parallel machines run some derivative of UNIX. But just take some system specific codes (i.e., codes that can do lotsa real work) and try to compile it on a different derivative of UNIX. (Well, I guess you can say the same about workstation UNIXes, but they seem to be more standardized than MPP UNIXes, although decidedly there are fewer flavors of MPP UNIXes than workstation UNIXes.) Someone also needs to develop a complete mathematical theory of parallel computing (Hehe, "complete" in the non-Godel sense) in order to develop bug-free and totally-portable algorithms. Kevin S.-C. Tsai, who's looking for a summer job in supercomputing. ;) stsai@husc.harvard.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: scott@cs.rochester.edu (Michael Scott) Subject: Re: barrier synchronisation Keywords: barrier ksr Organization: University of Rochester Computer Science Department References: <1993Nov4.152918.24903@hubcap.clemson.edu> In article <1993Nov4.152918.24903@hubcap.clemson.edu>, Kerry Guise wrote: | I'm wondering if anyone can help me with the barrier synchronisation | functions which are extensions of the KSR implementation of the pthreads | library. I'm trying to port a program written for the KSR to Solaris 2.x | and I've run up against this barrier :-). How easy would it be to | implement my own barrier synchronisation mechanism using standard tools | such as mutexes, cv's etc ? Can anyone suggest some papers I could read on | the subject ? Most any parallel machine is likely to have barrier routines in its parallel programming library. 
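One common way to build the barrier asked about above from nothing more than a mutex and a condition variable is sketched here. This is a minimal illustration in Python, whose threading.Condition (a lock plus condition variable) stands in for the pthread mutex and condition variable the poster mentions; it is not the KSR or Solaris library code, and the Barrier class name is invented.

# A reusable counter barrier built from only a mutex and a condition variable.

import threading

class Barrier:
    def __init__(self, nthreads):
        self.nthreads = nthreads
        self.count = 0
        self.generation = 0               # which episode of the barrier this is
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.nthreads:
                self.count = 0            # last arrival resets and releases all
                self.generation += 1
                self.cond.notify_all()
            else:
                while gen == self.generation:   # tolerate spurious wakeups
                    self.cond.wait()

def worker(tid, barrier, steps):
    for step in range(steps):
        # ... compute phase for this step would go here ...
        barrier.wait()                    # nobody starts the next step early
        print("thread", tid, "finished step", step)

barrier = Barrier(4)
threads = [threading.Thread(target=worker, args=(t, barrier, 3)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

The generation counter is what makes the barrier reusable: a thread released from one episode cannot be caught by the next episode's waiters.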
I'm not sure what Sun provides. Writing your own barrier is easy. You don't even need atomic operations other than read and write. There are lots of papers on the subject. One that John Mellor-Crummey and I wrote contains a survey of the main alternatives, and cites many other papers, so I'd suggest you start there: %A John M. Mellor-Crummey %A Michael L. Scott %T Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors %J ACM TOCS %V 9 %N 1 %P 21-65 %D February 1991 %X Earlier version published as TR 342, University of Rochester Computer Science Department, April 1990, and COMP TR90-114, Center for Research on Parallel Computation, Rice University, May 1990. The one possible glitch is that KSR's barrier implementation has an unusual interface that can be used in non-standard ways. If you always call pthread_barrier_checkout and pthread_barrier_checkin back-to-back, you're safe -- that give the effect that everybody else is used to. ----------------------- Michael L. Scott Computer Science Dept. (716) 275-7745, 5478 University of Rochester FAX 461-2018 Rochester, NY 14627-0226 scott@cs.rochester.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: conpar94@gup.uni-linz.ac.at (Siegfried Grabner,GUP-Linz) Subject: CFP: CONPAR 94 - VAPP VI Nntp-Posting-Host: athena.gup.uni-linz.ac.at Reply-To: conpar94@gup.uni-linz.ac.at Organization: GUP Linz, University Linz, AUSTRIA CONPAR 94 - VAPP VI Johannes Kepler University of Linz, Austria September 6-8, 1994 First Announcement and Call For Papers The past decade has seen the emergence of two highly successful series of CONPAR and of VAPP conferences on the subject of parallel processing. The Vector and Parallel Processors in Computational Sciene meetings were held in Chester (VAPP I, 1981), Oxford (VAPP II, 1984), and Liverpool (VAPP III, 1987). The International Conferences on Parallel Processing took place in Erlangen (CONPAR 81), Aachen (CONPAR 86) and Manchester (CONPAR 88). In 1990 the two series joined together and the CONPAR 90 - VAPP IV con ference was organized in Zurich. CONPAR 92 - VAPP V took place in Lyon, France. The next event in the series, CONPAR 94 - VAPP VI, will be organized in 1994 at the University of Linz (Austria) from September 6 to 8, 1994. The format of the joint meeting will follow the pattern set by its predecessors. It is intended to review hardware and architecture developments together with languages and software tools for supporting parallel processing and to highlight advances in models, algorithms and applications software on vector and parallel architectures. It is expected that the program will cover: * languages / software tools * automatic parallelization and mapping * hardware / architecture * performance analysis * algorithms * applications * models / semantics * paradigms for concurrency * testing and debugging * portability A special session will be organized on Parallel Symbolic Computation. The proceedings of the CONPAR 94 - VAPP VI conference are intended to be published in the Lecture Notes in Computer Science series by Springer Verlag. This conference is organized by GUP-Linz in cooperation with RISC-Linz, ACPC and IFSR. Support by GI-PARS, OCG, OGI, IFIP WG10.3, IEEE, ACM, AFCET, CNRS, C3, BCS-PPSG, SIG and other organizations is being negotiated. 
Schedule: Second Announcement and Final Call for Papers October 1993 Submission of complete papers and tuturials Feb 15 1994 Notification of acceptance May 1 1994 Final (camera-ready) version of accepted papers July 1 1994 Paper submittance: Contributors are invited to send five copies of a full paper not exceeding 15 double-spaced pages in English to the program committee chairman at: CONPAR 94 - VAPP VI c/o Prof. B. Buchberger Research Institute for Symbolic Computation (RISC-Linz) Johannes Kepler University, A-4040 Linz, Austria Phone: +43 7236 3231 41, Fax: +43 7236 3231 30 Email: conpar94@risc.uni-linz.ac.at The title page should contain a 100 word abstract and five specific keywords. CONPAR/VAPP also accepts and explicitly encourages submission by electronic mail to conpar94@risc.uni-linz.ac.at. Submitted files must be either * in uuencoded (preferably compressed) DVI format or * in uuencoded (preferably compressed) Postscript format as created on most Unix systems by cat paper.dvi | compress | uuencode paper.dvi.Z > paper.uue Organising committee: Conference Chairman: Prof. Jens Volkert Honorary Chairman: Prof. Wolfgang Handler Program Chairman: Prof. Bruno Buchberger Members: Siegfried Grabner, Wolfgang Schreiner Conference Address: University of Linz, Dept. of Computer Graphics and Parallel Processing (GUP-Linz), Altenbergerstr. 69, A-4040 Linz, Austria Tel.: +43-732-2468-887 (885), Fax.: +43-732-2468-10 Email: conpar94@gup.uni-linz.ac.at Provisional program committee: Chairman: Buchberger B. (A) Burkhart H. (CH), Cosnard M. (F), Delves L.M. (UK), Ffitch J. (UK), Haring G. (A), Hong H. (A), Jesshope Ch. (UK), Jordan H.F. (USA), Kaltofen E. (USA)., Kleinert W. (A), Kuchlin W. (D), Parkinson D. (UK), Miola A. (I), Mirenkov N. (J), Muraoka Y. (J), Reinartz K.D. (D), Steinhauser O. (A), Wait R. (UK), Wang P. (USA)., Zinterhof P. (A) Reply Form: We encourage you to reply via e-mail, giving us the information listed below. If you do not have the possibility to use e-mail, please copy the form below and send it to the conference address. CONPAR 94 - VAPP VI Reply Form Name:...................................First Name................Title......... Institution:.................................................................... Address:........................................................................ Telephone:.....................Fax:............................E-Mail:.......... Intentions (please check appropriate boxes) o I expect to attend the conference o I wish to present a paper o I wish to present at the exhibition (industrial / academic) ------------------------------------------------------------------------------ Siegfried GRABNER Tel: ++43-732-2468-884 (887) Dept. for Graphics and Parallel Processing Fax: ++43-732-2468-10 (GUP-Linz) Johannes Kepler University Email: Altenbergerstr.69, A-4040 Linz,Austria/Europe conpar94@gup.uni-linz.ac.at ------------------------------------------------------------------------------ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jx@cs.brown.edu (Jian Xu) Subject: Sharing hotel room at 5th IEEE SPDP, Dallas, TX Keywords: SPDP Sender: news@cs.brown.edu Organization: Brown University Department of Computer Science I am a male graduate student and I am looking for a person to share the hotel room at 5th IEEE Symposium on Parallel and Distributed Processing held on Dec. 1-4 in Dallas, TX. If you are interested, please email me at jx@cs.brown.edu. 
Jian Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sgarg@egr.duke.edu (Sachin Garg) Newsgroups: comp.parallel,comp.theory Subject: Performance evaluation of distributed algorithms Organization: Duke University EE Dept.; Durham, NC Hi everyone-- In performance evaluation of distributed algorithms, people generally talk in terms of time and message complexity. However, in some models of distributed computations (CSP, CCS), which were developed largly for correctness verification, time has been incorporated to obtain "finishing time of a program". It is either a probabilistic study, in which case an expected finishing time is reported or a deterministic study in which "upper or lower bounds" on the performance metric are reported. I am looking for refrences to all such work. Also, if you know of (1) any other models of distributed algorithms/programs that address both the correctness and the performance part, (2) Probabilistic modelling methods/tools used to model dis. algo. (3) work on fault/reliability modelling of distributed algorithms. please pass the info through email to: sgarg@markov.ee.duke.edu I will post a summary/list of refrences. Thanks in advance. Sachin Garg -- Sachin Garg Box 90291 Dept. of Electrical Engg. Duke University Durham, North Carolina 27708-0291 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.arch,comp.parallel From: elwin@media.mit.edu (Lee W Campbell) Subject: Re: INMOS Information available on T9000 Sender: news@media.mit.edu (USENET News System) Organization: MIT Media Laboratory References: <1993Nov11.213059.26255@hubcap.clemson.edu> Date: Sat, 13 Nov 1993 21:27:57 GMT Apparently-To: comp-parallel@uunet.uu.net In article <1993Nov11.213059.26255@hubcap.clemson.edu>, thompson@inmos.co.uk () writes: > Some information regarding the IMS T9000 transputer is now available on the > INMOS FTP server, ftp.inmos.co.uk [192.26.234.3], in the directory > /inmos/info/T9000. This relates mainly to the superscalar processor and > the cache memory system at present. For the fun of it I checked it out. What was available was a Q&A and a press release. I got the press release. It's kind of amusing: 26 March, 1993. London, UK. INMOS announces sample availability of the T9000 transputer. Just three years after design commenced, ... Now I *know* I was hearing about this chip in the summer of '91 and possibly earlier. Does this mean that Inmos announced the part at the same time as they commenced design?? In any case, I think that delay of 2.5 years from announcement to sampling has got to set a record in the microprocessor biz. The T9000 is the world's fastest single-chip computer, with its 200 MIPS, 25 MFLOPS peak performance and its 32-bit superscalar integer processor, 64-bit floating point unit, virtual channel processor, 100Mbits/s communications ... World's fastest WHAT? Slower than Alpha, HP PA-RISC, RS/6000, R3000, R4000, supersparc, Pentium, and roughly comperable to a fast '486, so how in the hell do they manage to call it "fastest'??? -- Often in error; Never in Doubt! 
elwin@media.mit.edu 617-253-0381 Lee Campbell MIT Media Lab Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: linear@budada.eecs.umich.edu (linear) Subject: parallel benchmarks Date: 14 Nov 1993 21:14:36 GMT Organization: University of Michigan Engineering, Ann Arbor Reply-To: linear@budada.eecs.umich.edu (Hsien-Hsin Lee) Nntp-Posting-Host: budada.eecs.umich.edu Is there any free parallel benchmarks available on anonymous ftp sites that I can access ? I'll do some experiments for comparing some loop scheduling strategies to analyze the data affinity, load balancing, synchronization and communication overhead. Thanx. HHL linear@eecs.umich.edu True today, true yesterday, true tomorrow That's my definition of classics. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jimh@netcom.com (James Hsin) Subject: need references on massively parallel architecture Organization: Netcom - Online Communication Services (408 241-9760 guest) I'm working on a paper on massively parallel architectures and am at a loss for reference material. I would really appreciate it if someone could point me in the general direction of some technical information. Thanks. James Hsin jimh@netcom.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From: jet@nas.nasa.gov (J. Eric Townsend) Subject: mailing list info on TMC CM-5, Intel iPSC/860, Intel Paragon Sender: news@nas.nasa.gov (News Administrator) Nntp-Posting-Host: boxer.nas.nasa.gov Organization: NAS/NASA-Ames Research Center Date: Mon, 15 Nov 1993 08:00:17 GMT Apparently-To: comp-parallel@ames.arc.nasa.gov J. Eric Townsend (jet@nas.nasa.gov) last updated: 3 Nov 1993 (corrected admin/managers list info) This file is posted to USENET automatically on the 1st and 15th of each month. It is mailed to the respective lists to remind users how to unsubscribe and set options. INTRODUCTION ------------ Several mailing lists exist at NAS for the discussion of using and administrating Thinking Machines CM-5 and Intel iPSC/860 parallel supercomputers. These mailing lists are open to all persons interested in the systems. The lists are: LIST-NAME DESCRIPTION cm5-managers -- discussion of administrating the TMC CM-5 cm5-users -- " " using the TMC CM-5 ipsc-managers -- " " administrating the Intel iPSC/860 ipsc-users -- " " using the Intel iPSC/860 paragon-managers -- " " administrating the Intel Paragon paragon-users -- " " using the Intel Paragon The ipsc-* lists at cornell are going away, the lists here will replace them. (ISUG members will be receiving information on this in the near future.) The cm5-users list is intended to complement the lbolt list at MSC. SUBSCRIBING/UNSUBSCRIBING ------------------------- All of the above lists are run with the listserv package. In the examples below, substitute the name of the list from the above table for the text "LIST-NAME". To subscribe to any of the lists, send email to listserv@boxer.nas.nasa.gov with a *BODY* of subscribe LIST-NAME your_full_name Please note: - you are subscribed with the address that you sent the email from. You cannot subscribe an address other than your own. This is considered a security feature, but I haven't gotten around to taking it out. 
- your subscription will be handled by software, so any other text you send will be ignored Unsubscribing It is important to understand that you can only unsubscribe from the address you subscribed from. If that is impossible, please contact jet@nas.nasa.gov to be unsubscribed by hand. ONLY DO THIS IF FOLLOWING THE INSTRUCTIONS DOES NOT PRODUCE THE DESIRED RESULTS! I have better things to do than manually do things that can be automated. To unsubscribe from any of the mailing lists, send email to listserv@boxer.nas.nasa.gov with a body of unsubscribe LIST-NAME OPTIONS ------- If you wish to receive a list in digest form, send a message to listserv@boxer.nas.nasa.gov with a body of set LIST-NAME mail digest OBTAINING ARCHIVES ------------------ There are currently no publicly available archives. As time goes on, archives of the lists will be made available. Watch this space. -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edwin@dutind2.twi.tudelft.nl (Edwin Vollebregt) Subject: Re: The Future of Parallel Computing Sender: news@dutiws.twi.tudelft.nl (TWI News Administration) Organization: Delft University of Technology References: <1993Nov11.142504.9124@hubcap.clemson.edu> Date: Mon, 15 Nov 1993 09:10:44 GMT Apparently-To: comp-parallel@NL.net Hello This thread about the future of parallel computation has been going on for a while. I have heard many good arguments coming along. Let me give my opinion, where I borrow a lot from the latest replies that I saved. Excuse me for not knowing who said what, and excuse me if I explain words precisely how you NOT meant them. I think the discussion was started as follows: > We are now in an age when the high performance machines have > various data network topologies, i.e. meshes, torii, linear arrays, > vector processors, hypercubes, fat-trees, switching networks, etc.. > etc.. These parallel machines might all have sexy architectures, but > we are headed in the wrong direction if we don't take a step back and > look at the future of our work. We shouldn't have to rewrite our ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > algorithms from scratch each time our vendor sells us the latest > hardware with amazing benchmarks. Very good point. IMHO, the low portability is partially the result of a lack of uniform programming model, tools, languages and communication interfaces. > My hope is that, eventually, people will be able to switch from MPP A > to MPP B merely by recompiling all of their code (perhaps with a few > changes of #include files or predefined constants). This might be possible to some extent when standard interfaces arise. However, I think that it cannot be achieved completely, as I'll explain below. > In order for this to happen, though, the community must realize that a > loss of a few percent of performance in exchange for portability and ease > of coding and maintenance is acceptable for MPPs. This argument has also been given by others: > The goal of compiling for parallel code should NOT necessarily be "the > best possible code;" it should be "reasonably close to the best possible > code." 
In future, when parallelizing compilers are much better than nowadays, we can expect that only ``a few percent'' of performance is lost by relying on them. We now come to my contribution to the discussion. My point is that in scientific computing, there is much freedom in choosing an algorithm. There are many ways to calculate a result. Furthermore, there are large differences in parallelism between algorithms. Some algorithms are well suited for parallel processing, others not. And no compiler can ever do anything about that. Finally there is a large influence of architecture on parallelism. Thus the parallelism in an algorithm possibly cannot be exploited on MPP A or MPP B. I read part of this point also in other replies: > A computational scientist has to know his computer sufficiently well > to make it produce results (nearly) as efficiently as possible. The > scientist will have to push his methods (i.e., codes) whenever he > acquires a new hot box. ... > I believe there is no excuse for ignoring the hardware you use for > scientific computing. ... > A corollary is that "black-box" usage of codes or compilers in scientific > computing will often be poor use of resources. A computational scientist must know which algorithm is suitable for his architecture, and should tune his code to the architecture. Thus I disagree with: > A few replies to this thread have taken the line that scientific > computing is quite happy to have to 'streamline code', or in other ^^^^^^^^^^^^^ > words hack routines in assembly language. ^^^^^^^ Streamline code IMHO means ``use an algorithm that is suitable for the architecture at hand''. In conclusion: > I think the time has come for software engineering to catch up with > hardware engineering in the parallel computing world. There is much need for standards. Programming languages, communication libraries, memory models, .. There is also much need for parallelizing compilers that (semi-) automatically distribute tasks, generate communi- cation statements, and that are quite good in getting the best performance out of an algorithm on a specific architecture. Finally, comutational scientists should realize which parts of their algorithms are not well suited for an architecture. In the design of new applications, they should realize which parts are most likely to change when a new architecture becomes available, and should keep these parts well separated from other parts of the program. > Please let me know what you think, Edwin _________________________________________________________________________ | | | | | Ir. Edwin A.H. Vollebregt | Section of Applied Mathematics | ,==. | | | Delft University of Technology | /@ | | | phone +31(0)15-785805 | Mekelweg 4 | /_ < | | edwin@pa.twi.tudelft.nl | 2628 CD Delft | =" `g' | | | The Netherlands | | |____________________________|_________________________________|__________| Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: spitz@irb.uni-hannover.de (Jan Spitzkowsky) Subject: parallel simulation Summary: parallel simulation, pessimistic - optimistic Keywords: pdes, pessimistic simulation, optimistic simulation Sender: news@newsserver.rrzn.uni-hannover.de (News Service) Organization: IRB Uni-Hannover, Germany Date: Mon, 15 Nov 1993 11:10:00 GMT Apparently-To: hypercube@hubcap.clemson.edu hello, i'm new to this group and i don't know whether there is a more specific one for simulation in parallel. i am looking for parallel discrete event simulators. 
does anybody work with a pdes-simulator or have some experience with it? i am looking for simulators with pessimistic (conservative) and/or optimistic algorithms. thanks, jan Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jabf@festival.ed.ac.uk (J Blair-Fish) Subject: General Purpose Computing Message-ID: Organization: Edinburgh University Date: Mon, 15 Nov 1993 11:44:39 GMT The British Computer Society Parallel Processing Specialist Group (BCS PPSG) General Purpose Parallel Computing A One Day Open Meeting with Invited and Contributed Papers 22 December 1993, University of Westminster, London, UK Invited speakers : Les Valiant, Harvard University Bill McColl, PRG, University of Oxford, UK David May, Inmos, UK A key factor for the growth of parallel computing is the availability of portable software. To be portable, software must be written to a model of machine performance with universal applicability. Software providers must be able to provide programs whose performance will scale with machine and application size according to agreed principles. This environment presupposes a model of parallel performance, and one which will perform well for irregular as well as regular patterns of interaction. Adoption of a common model by machine architects, algorithm & language designers and programmers is a precondition for general purpose parallel computing. Valiant's Bulk Synchronous Parallel (BSP) model provides a bridge between application, language design and architecture for parallel computers. BSP is of the same nature for parallel computing as the Von Neumann model is for sequential computing. It forms the focus of a project for scalable performance parallel architectures supporting architecture independent software. The model and its implications for hardware and software design will be described in invited and contributed talks. The PPSG, founded in 1986, exists to foster development of parallel architectures, languages and applications & to disseminate information on parallel processing. Membership is completely open; you do not have to be a member of the British Computer Society. For further information about the group contact either of the following : Chair : Mr. A. Gupta, Philips Research Labs, Crossoak Lane, Redhill, Surrey, RH1 5HA, UK, gupta@prl.philips.co.uk Membership Secretary: Dr. N. Tucker, Paradis Consultants, East Berriow, Berriow Bridge, North Hill, Nr. Launceston, Cornwall, PL15 7NL, UK Please share this information and display this announcement The British Computer Society Parallel Processing Specialist Group (BCS PPSG) General Purpose Parallel Computing 22 December 1993, Fyvie Hall, 309 Regent Street, University of Westminster, London, UK Provisional Programme 9 am-10 am Registration & Coffee L. Valiant, Harvard University, "Title to be announced" W. McColl, Oxford University, Programming models for General Purpose Parallel Computing A. Chin, King's College, London University, Locality of Reference in Bulk-Synchronous Parallel Computation P. Thannisch et al, Edinburgh University, Exponential Processor Requirements for Optimal Schedules in Architecture with Locality Lunch D. May, Inmos "Title to be announced" R. Miller, Oxford University, A Library for Bulk Synchronous Parallel Programming C. Jesshope et al, Surrey University, BSPC and the N-Computer Tea/Coffee P.
Dew et al, Leeds University, Scalable Parallel Computing using the XPRAM model S. Turner et al, Exeter University, Portability and Parallelism with `Lightweight P4' N. Kalentery et al, University of Westminster, From BSP to a Virtual Von Neumann Machine R. Bisseling, Utrecht University, Scientific Computing on Bulk Synchronous Parallel Architectures B. Thompson et al, University College of Swansea, Equational Specification of Synchronous Concurrent Algorithms and Architectures 5.30 pm Close Please share this information and display this announcement The British Computer Society Parallel Processing Specialist Group Booking Form/Invoice BCS VAT No. : 440-3490-76 Please reserve a place at the Conference on General Purpose Parallel Computing, London, December 22 1993, for the individual(s) named below. Name of delegate BCS membership no. Fee VAT Total (if applicable) ___________________________________________________________________________ ___________________________________________________________________________ ___________________________________________________________________________ Cheques, in pounds sterling, should be made payable to "BCS Parallel Processing Specialist Group". Unfortunately credit card bookings cannot be accepted. The delegate fees (including lunch, refreshments and proceedings) are (in pounds sterling) : Members of both PPSG & BCS: 55 + 9.62 VAT = 64.62 PPSG or BCS members: 70 + 12.25 VAT = 82.25 Non members: 90 + 15.75 VAT = 105.75 Full-time students: 25 + 4.37 VAT = 29.37 (Students should provide a letter of endorsement from their supervisor that also clearly details their institution) Contact Address: ___________________________________________ ___________________________________________ ___________________________________________ Email address: _________________ Date: _________________ Day time telephone: ________________ Places are limited so please return this form as soon as possible to : Mrs C. Cunningham BCS PPSG 2 Mildenhall Close, Lower Earley, Reading, RG6 3AT, UK (Phone 0734 665570) -- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: sjnh@panix.com (Steven Hargreaves) Newsgroups: comp.parallel,comp.parallel.pvm Subject: Need Information and Job Candidates Date: 15 Nov 1993 08:10:54 -0500 Organization: PANIX Public Access Internet and Unix, NYC Summary: Looking for suppliers and people to help build finance application Keywords: Finance Banking Simulation Portfolio Application I am looking at the feasibility of using parallel techniques in a project for a financial institution. The system would support portfolio management. The technical approach is not fixed, but I'd like to hear from software vendors selling (or developing) tools for parallel processing on multi-cpu Intel systems or across networked Unix boxes or PCs running Windows and potentially /NT. C language is preferred. We plan to recruit an individual to work on the technical aspects of the project. The skills profile will be: demonstrated ability to develop parallel systems; sound understanding of both theoretical and practical issues; intellectual strength coupled with a "can do" approach that values deliverables as well as elegance. Compensation will be competitive; location is Wall St.. We have started gathering resumes. Mail me with an outline if this sounds interesting. 
Regards, Steven Hargreavs sjnh@panix.com To: comp-parallel@mcnc.org Path: news.duke.edu!sgarg From: sgarg@egr.duke.edu (Sachin Garg) Newsgroups: comp.os.research Subject: Performance evaluation of distributed algorithms Date: 13 Nov 93 18:16:22 GMT Sender: news@cs.duke.edu Followup-To: comp.parallel Organization: Duke University EE Dept.; Durham, NC Nntp-Posting-Host: markov.ee.duke.edu [This is a cross post from comp.parallel] Hi everyone-- In performance evaluation of distributed algorithms, people generally talk in terms of time and message complexity. However, in some models of distributed computations (CSP, CCS), which were developed largly for correctness verification, time has been incorporated to obtain "finishing time of a program". It is either a probabilistic study, in which case an expected finishing time is reported or a deterministic study in which "upper or lower bounds" on the performance metric are reported. I am looking for refrences to all such work. Also, if you know of (1) any other models of distributed algorithms/programs that address both the correctness and the performance part, (2) Probabilistic modelling methods/tools used to model dis. algo. (3) work on fault/reliability modelling of distributed algorithms. please pass the info through email to: sgarg@markov.ee.duke.edu I will post a summary/list of refrences. Thanks in advance. Sachin Garg ------------------------------------------------------------------------------ sgarg@markov.ee.duke.edu Box 90291 Duke University Durham, NC 27708 ph: (919) 660 5230 ------------------------------------------------------------------------------ -- ------------------------------------------------------------------------------ Sachin Garg Box 90291 Dept. of Electrical Engg. Duke University Durham, North Carolina 27708-0291 Date: Mon, 15 Nov 93 11:44:43 GMT From: jabf@festival.ed.ac.uk To: comp-parallel@britain.eu.net Newsgroups: news.announce.conferences,comp.theory,comp.arch,comp.sys.super,uk.announce,uk.events,uk.bcs.announce,comp.benchmarks,comp.sys.sequent,comp.sys.alliant,comp.sys.encore,comp.sys.sun.hardware,comp.sys.hp,comp.sys.dec,comp.sys.sgi.hardware,eunet.misc,comp.simulation,sci.math.num-analysis,comp.theory.cell-automata,comp.theory.dynamic-sys,comp.theory.self-org-sys,comp.ai,sci.physics,sci.electronics,comp.arch.storage Path: jabf From: jabf@festival.ed.ac.uk (J Blair-Fish) Subject: General Purpose Computing Message-ID: Organization: Edinburgh University Date: Mon, 15 Nov 1993 11:44:39 GMT The British Computer Society Parallel Processing Specialist Group (BCS PPSG) General Purpose Parallel Computing A One Day Open Meeting with Invited and Contributed Papers 22 December 1993, University of Westminster, London, UK Invited speakers : Les Valiant, Harvard University Bill McColl, PRG, University of Oxford, UK David May, Inmos, UK A key factor for the growth of parallel computing is the availability of port- able software. To be portable, software must be written to a model of machine performance with universal applicability. Software providers must be able to provide programs whose performance will scale with machine and application size according to agreed principles. This environment presupposes a model of paral- lel performance, and one which will perform well for irregular as well as regu- lar patterns of interaction. Adoption of a common model by machine architects, algorithm & language designers and programmers is a precondition for general purpose parallel computing. 
Valiant's Bulk Synchronous Parallel (BSP) model provides a bridge between appli- cation, language design and architecture for parallel computers. BSP is of the same nature for parallel computing as the Von Neumann model is for sequential computing. It forms the focus of a project for scalable performance parallel architectures supporting architecture independent software. The model and its implications for hardware and software design will be described in invited and contributed talks. The PPSG, founded in 1986, exists to foster development of parallel architec- tures, languages and applications & to disseminate information on parallel pro- cessing. Membership is completely open; you do not have to be a member of the British Computer Society. For further information about the group contact ei- ther of the following : Chair : Mr. A. Gupta Membership Secretary: Dr. N. Tucker Philips Research Labs, Crossoak Lane, Paradis Consultants, East Berriow, Redhill, Surrey, RH1 5HA, UK Berriow Bridge, North Hill, Nr. Launceston, gupta@prl.philips.co.uk Cornwall, PL15 7NL, UK Please share this information and display this announcement The British Computer Society Parallel Processing Specialist Group (BCS PPSG) General Purpose Parallel Computing 22 December 1993, Fyvie Hall, 309 Regent Street, University of Westminster, London, UK Provisional Programme 9 am-10 am Registration & Coffee L. Valiant, Harvard University, "Title to be announced" W. McColl, Oxford University, Programming models for General Purpose Parallel Computing A. Chin, King's College, London University, Locality of Reference in Bulk-Synchronous Parallel Computation P. Thannisch et al, Edinburgh University, Exponential Processor Requirements for Optimal Schedules in Architecture with Locality Lunch D. May, Inmos "Title to be announced" R. Miller, Oxford University, A Library for Bulk Synchronous Parallel Programming C. Jesshope et al, Surrey University, BSPC and the N-Computer Tea/Coffee P. Dew et al, Leeds University, Scalable Parallel Computing using the XPRAM model S. Turner et al, Exeter University, Portability and Parallelism with `Lightweight P4' N. Kalentery et al, University of Westminster, From BSP to a Virtual Von Neumann Machine R. Bisseling, Utrecht University, Scientific Computing on Bulk Synchronous Parallel Architectures B. Thompson et al, University College of Swansea, Equational Specification of Synchronous Concurrent Algorithms and Architectures 5.30 pm Close Please share this information and display this announcement The British Computer Society Parallel Processing Specialist Group Booking Form/Invoice BCS VAT No. : 440-3490-76 Please reserve a place at the Conference on General Purpose Parallel Computing, London, December 22 1993, for the individual(s) named below. Name of delegate BCS membership no. Fee VAT Total (if applicable) ___________________________________________________________________________ ___________________________________________________________________________ ___________________________________________________________________________ Cheques, in pounds sterling, should be made payable to "BCS Parallel Processing Specialist Group". Unfortunately credit card bookings cannot be accepted. 
The delegate fees (including lunch, refreshments and proceedings) are (in pounds sterling) :
Members of both PPSG & BCS: 55 + 9.62 VAT = 64.62
PPSG or BCS members: 70 + 12.25 VAT = 82.25
Non members: 90 + 15.75 VAT = 105.75
Full-time students: 25 + 4.37 VAT = 29.37
(Students should provide a letter of endorsement from their supervisor that also clearly details their institution)
Contact Address: ___________________________________________ ___________________________________________ ___________________________________________ Email address: _________________ Date: _________________ Day time telephone: ________________
Places are limited so please return this form as soon as possible to : Mrs C. Cunningham BCS PPSG 2 Mildenhall Close, Lower Earley, Reading, RG6 3AT, UK (Phone 0734 665570)
-- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: sgregory@cs.sunysb.edu (Stanley Gregory) Subject: Arch. for Solving Linear Equations Organization: State University of New York, Stony Brook Newsgroups: comp.arch,comp.parallel Hi. I need information on what the best design would be for a computer to solve linear equations. This theoretical machine can have its instruction set optimized for this purpose, and its performance should scale well as more processors are added. Specifically, I need to know where I can find information on processors like the one described in the previous paragraph (if any exist). Also, I would appreciate it if anyone could point me to where I could find LINPACK/LAPACK statistics for various architectures (and for ones optimized for solving linear equations). Thanks ahead of time to anyone who can help. Please reply via e-mail, as I do not get to check these newsgroups very often. -- |sgregor@ic.sunysb.edu | "...Could be a mirage. Looks rather like a mirage | |----------------------| I once saw -- if I saw it." - Lord Lambourn, from | |sgregory@cs.sunysb.edu| _Yellowbeard_ | Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rehmann@cscs.ch (Rene M. Rehmann) Subject: Final CFP: IFIP WG10.3 conference on programming environments, Switzerland Keywords: CFP, working conference, massive parallelism, programming, tools Sender: usenet@cscs.ch (NEWS Manager) Nntp-Posting-Host: vevey.cscs.ch Reply-To: rehmann@cscs.ch Organization: Centro Svizzero di Calcolo Scientifico (CSCS), Manno, Switzerland FINAL CALL FOR PAPERS IFIP WG10.3 WORKING CONFERENCE ON PROGRAMMING ENVIRONMENTS FOR MASSIVELY PARALLEL DISTRIBUTED SYSTEMS April 25 - 30, 1994 Monte Verita, Ascona, Switzerland Massively parallel systems with distributed resources will play a very important role in the future of high performance computing. One of the current obstacles to using these systems is that they are difficult to program. The conference will bring together active researchers who are working on ways to help programmers exploit the performance potential of massively parallel systems. The working conference will consist of sessions for full and short papers, interleaved with poster and demonstration sessions. The Conference will be held April 25 - 30, 1994 at the Centro Stefano Franscini, located in the hills above Ascona at Lago Maggiore, in the southern part of Switzerland.
It is organized by the Swiss Scientific Computing Center CSCS ETH Zurich. The conference is the forthcoming event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) on Programming Environments for Parallel Computing. The conference succeeds the 1992 Edinburgh conference on Programming Environments for Parallel Computing. SUBMISSION OF PAPERS Submission of papers is invited in the following areas: -- Programming models for parallel distributed computing -- Computational models for parallel distributed computing -- Program transformation tools -- Concepts and tools for the design of parallel distributed algorithms -- Reusability in parallel distributed programming -- Concepts and tools for debugging massively parallel systems (100+ processing nodes) -- Concepts and tools for performance monitoring of massively parallel systems (100+ processing nodes) -- Tools for application development on massively parallel systems -- Support for computational scientists: what do they really need ? -- Application libraries (e.g., BLAS, etc.) for parallel distributed systems: what do they really offer ? -- Problem solving environments for parallel distributed programming Authors are invited to submit complete, original, papers reflecting their current research results. All submitted papers will be refereed for quality and originality. The program committee reserves the right to accept a submission as a long, short, or poster presentation paper. The papers will be published in book-form. Manuscripts should be double spaced, should include an abstract, and should be limited to 5000 words (20 double spaced pages); The contact authors are requested to list e-mail addresses if available. Fax or electronic submissions will not be considered. Please submit 5 copies of the complete paper to the following address: PD Dr. Karsten M. Decker IFIP 94 CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland IMPORTANT DATES Deadline for submission: December 1, 1993 Notification of acceptance: February 1, 1994 Final versions: March 1, 1994 CONFERENCE CHAIR Karsten M. Decker CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland phone: +41 (91) 50 8233 fax: +41 (91) 50 6711 e-mail: decker@serd.cscs.ch ORGANIZATION COMMITTEE CHAIR Rene M. Rehmann CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland phone: +41 (91) 50 8234 fax: +41 (91) 50 6711 e-mail: rehmann@serd.cscs.ch PROGRAM COMMITTEE Francoise Andre, IRISA, France Thomas Bemmerl, Intel Corporation, Germany Arndt Bode, Technical University Muenchen, Germany Helmar Burkhart, University Basel, Switzerland Lyndon J. Clarke, University of Edinburgh, UK Michel Cosnard, Ecole Normale Superieure de Lyon, France Karsten M. Decker, CSCS-ETH Zurich, Switzerland Thomas Fahringer, University of Vienna, Austria Claude Girault, University P.et M. Curie Paris, France Anthony J. G. Hey, University of Southhampton, UK Roland N. Ibbett, University of Edinburgh, UK Nobuhiko Koike, NEC Corporation, Japan Peter B. 
Ladkin, University of Stirling, UK Juerg Nievergelt, ETH Zurich, Switzerland Edwin Paalvast, TNO-TPD, The Netherlands Gerard Reijns, Delft University of Technology, The Netherlands Eugen Schenfeld, NEC Research Institute, USA Clemens-August Thole, GMD, Germany Owen Thomas, Meiko, UK Marco Vanneschi, University of Pisa, Italy Francis Wray, Cambridge, UK MONTE VERITA, ASCONA, SWITZERLAND Centro Stefano Franscini, Monte Verita, located in the scenic hills above Ascona, with a beautiful view on Lago Maggiore, has excellent conference and housing facilities for about sixty participants. Monte Verita enjoys a sub-alpine/mediterranean climate with mean temperatures between 15 and 18 C in April. The closest airport to Centro Stefano Franscini is Lugano-Agno which is connected to Zurich, Geneva and Basle and many other cities in Europe by air. Centro Stefano Franscini can also be reached conveniently by train from any of the three major airports in Switzerland to Locarno by a few hours scenic trans-alpine train ride. It can also be reached from Milano in less than three hours. For more information, send email to ifip94@cscs.ch For a PostScript-version of the CFP, anon-ftp to: pobox.cscs.ch:/pub/SeRD/IFIP94/CALL_FOR_PAPERS.ps Karsten M. Decker, Rene M. Rehmann --- Rene Rehmann phone: +41 91 50 82 34 Section for Research and Development (SeRD) fax : +41 91 50 67 11 Swiss Scientific Computing Center CSCS email: rehmann@cscs.ch Via Cantonale, CH-6928 Manno, Switzerland Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: msodhi@agsm.ucla.edu (Mohan Sodhi) Subject: Re: The Future of Parallel Computing Organization: UCLA Microcomputer Support Office References: <1993Nov9.163338.9165@hubcap.clemson.edu> <1993Nov10.132809.21164@hubcap.clemson.edu> Nntp-Posting-Host: risc.agsm.ucla.edu >Jeff> PLEASE don't forget about portability. If the same code can be >Jeff> compiled onto multiple architectures, it will make the >Jeff> programmer's job MUCH MUCH easier. (Tune to architecture as >Jeff> needed, instead of rewrite from scratch.) >I agree 100%. Look at how serial programs work -- under Unix and >derivatives, many programs don't need to be modified to compile for >different architectures. This is especially true of user applications >(as opposed to OS-type software). My hope is that, eventually, people >will be able to switch from MPP A to MPP B merely by recompiling all >of their code (perhaps with a few changes of #include files or >predefined constants). In order for this to happen, though, the >community must realize that a loss of a few percent of performance in >exchange for portability and ease of coding and maintenance is >acceptable for MPPs. The hopes of the authors' above are a little too pie-in-the sky. For one, even for serial computers, porting an application from one operating system to another can take _months_ and involve a lot of new code. Second, developing architecture-free algorithms does not mean *no new code* -- it just means no new math to be worked out. I do not think it is possible to have a program compile under different architectures (even if the algorithm is unchanged) with just a few compiler directives; I am not even sure this is desirable. 
One thing at a time: let us concentrate on architecture free algorithms for now (in my area, operations research, this itself is a very new concept!): this will take our minds off the tooth fairy who will write a program that will compile under every computer architecture and every operating system. Mohan Sodhi msodhi@agsm.ucla.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Murray Cole Subject: Re: The Future of Parallel Computing Sender: UseNet News Admin Organization: Department of Computer Science, University of Edinburgh References: <1993Nov11.142432.8924@hubcap.clemson.edu> <1993Nov15.154454.14783@hubcap.clemson.edu> Date: Mon, 15 Nov 1993 16:55:21 GMT Apparently-To: comp-parallel@uknet.ac.uk In article <1993Nov15.154454.14783@hubcap.clemson.edu>, dbader@eng.umd.edu (David Bader) writes: > You need to look at the "granularity" of a problem to decide whether > it will perform faster on a parallel machine. (For an introduction to > granularity, see Stone, "High Performance Computer Architecture", Section 6.2). > > If your machine is meant for course-grained problems (such as the case you > outline above), you will need to sum "n > N" numbers to see a speedup, where "N" > is some large threshold for the given algorithm and machine size. That's what I'm getting at. The algorithm is clear and simple, but the peculiarities of this machine or that machine mean that I can't be sure if its going to work well (even if the underlying network is right), without lifting the lid. This seems a little unfortunate. Murray. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Path: beer From: beer@ICSI.Berkeley.EDU (Joachim Beer) Subject: summer internships Date: 15 Nov 1993 23:55:44 GMT Organization: International Computer Science Institute, Berkeley, CA, U.S.A. Nntp-Posting-Host: icsib11.icsi.berkeley.edu The International Computer Science Institute (ICSI) is soliciting applications for their student summer internship program in Europe. The program is open to advanced graduate students in computer science at American universities. Applicants don't need to be U.S. citizens OR permanent residents of the U.S. to be eligible for the program. The selection process is soley based on merit and works roughly as follows: an application is submitted to ICSI where an initial selection takes place. ICSI does not have special application forms for the summer internship program. A cover letter stating the applicants intentions, transcripts, and one or two letters of recommendation is sufficient. It would also be very helpful if the applicant could provide a short proposal stating what he/she is interested in and the particular fields he/she would want to work in. The selected applications will be forwarded to those participating research labs that best match the applicants scientific interest and background. Depending on the applicants interest, background, and research proposal her/his application might be send to several of the research labs. It is the research labs that make the final decision. Current sponsor nations are Germany, Italy and Switzerland. ICSI is *not* able to support or process applications for internships in non-sponsor nations. Graduate students which have been invited by research labs in ICSI sponsor nations due to their own initiative or existing collaborations can apply for travel grants. However, ICSI will not be able to provide financial support beyond travel grants. 
Financial support provided by the hosting research lab is approximately $1800 per month for three months, while ICSI provides travel grants of up to $1500. Submit applications including at least one letter of recommendation, a list of completed course work, and a statement of intent to: International Computer Science Institute - Summer Internship Program 1947 Center Street, Suite 600 Berkeley, CA 94704
******************************
*                            *
*   DEADLINE March 1, 1994   *
*                            *
******************************
Note: ICSI is only a clearinghouse for summer internship applications. ICSI is not able to answer questions concerning specific research activities within the participating research labs. In the past, summer interns have worked in such areas as computer vision, expert systems, knowledge representation, natural language processing, software engineering, software tool development, etc. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: forge@netcom.com (FORGE Customer Support) Subject: News from Applied Parallel Research Summary: Latest Developments in Parallelization Tools from APR Organization: Applied Parallel Research, Inc. +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+= The Latest News from Applied Parallel Research... +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+= November 1993 As we enter our third year, we are more excited than ever about the latest additions to our FORGE family of products, and the growing number of vendors and programmers who are using them. =-=-=-= MAGIC =-=-=-= At the top of our list of new things we want to tell you about are our new MAGIC batch parallelizers that we are announcing at Supercomputing 93 in Portland. FORGE Magic/DM Parallelizer (dpf) for distributed memory is able to automatically (automagically?) partition data arrays and distribute loops based upon a static analysis of the source program. Or, you can supply a serial timing profile to direct the automatic parallelization right to the hot spots in your code. With FORGE Magic/SM (spf) for shared memory systems, data arrays are automatically padded and aligned for optimal cache management, and DO loops are parallelized with target-system compiler-specific directives. It would be outrageous of us to claim that our MAGIC technology can automatically produce the best parallelization strategy for all applications, and we won't. But the one important claim we do make is that it is an incredible way to get a first rough sketch of a parallelization. This may be especially useful with large, unwieldy codes where, most likely, you would not have a clue as to where to begin. A parallelization report shows you in great detail which loops were parallelized and which data arrays were partitioned, and how this was done. More importantly, it shows which loops/arrays could not be parallelized and the inhibitors in the program that prevented this. An output option annotates the original Fortran 77 program with parallelization directives that you can amend to refine the parallelization. Our intention with these MAGIC parallelizing pre-compilers is to provide facilities similar to those we used to vectorize code not too long ago. Each can be used to generate instrumented programs for serial runtime execution timing. FORGE Magic/DM (dpf) can also instrument the generated parallelized code to produce parallel runtime performance profiles that identify communication bottlenecks and losses due to poor load balancing.
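As an illustration of what directive-annotated source of this kind can look like, here is a minimal sketch of a toy kernel with data-distribution and loop-parallelization directives added by hand. The directive spellings are HPF-style and chosen purely for illustration; APR's own directive syntax is not shown in this posting, so the exact names used below are an assumption.

program annotated_example
  ! Toy kernel annotated with HPF-style directives: the arrays are
  ! block-distributed across processors and the loop is asserted to have
  ! no loop-carried dependences, so its iterations may run in parallel.
  ! (Illustrative only; a real tool would emit its own directive dialect.)
  implicit none
  integer, parameter :: n = 100000
  real :: x(n), y(n)
!HPF$ DISTRIBUTE x(BLOCK)
!HPF$ ALIGN y(i) WITH x(i)
  integer :: i
  x = 1.0
  y = 2.0
!HPF$ INDEPENDENT
  do i = 1, n
     y(i) = y(i) + 2.0*x(i)
  end do
  print *, 'y(1) =', y(1)
end program annotated_example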
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= High Performance Fortran, HPF =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Our new FORGE HPF batch pre-compiler, xhpf, is now available with an optional MAGIC automatic parallelization mode as well. xhpf with MAGIC is able to take a serial Fortran 77 program and automatically generate parallelized code with Fortran 90 array syntax and HPF directives. Our HPF pre-compiler has a number of capabilities that may prove invaluable. For example, the HPF consistency checker assures that the parallelization directives you supply are legal HPF and are consistent with themselves and the program. Also, the parallelization is viewable from our interactive FORGE/DMP Parallelizer through its compatible database. And, if your target system does not yet have an HPF compiler, xhpf, like dpf, will generate an SPMD Fortran 77 code with explicit message passing calls interfacing to PVM, Express, Linda, etc. You may know that other companies are struggling right now to provide HPF compilers on a number of systems. However, we can report the following regarding APR's HPF tools:
* They are available today.
* We generate efficient Fortran 77 code from the HPF that is immediately compilable and optimizable by most native f77 compilers.
* We parallelize Fortran DO loops as well as Fortran 90 array syntax. (HPF compilers only parallelize array syntax.)
* MAGIC on xhpf will generate an initial HPF parallelization automatically for you to start with.
* You can review and analyze the parallelization with our FORGE/DMP interactive tool.
* You can instrument the parallel code and obtain a parallel runtime performance profile that includes measurement of all communication costs and bottlenecks.
* With our unique parallel runtime library, we can interface to all major multiprocessor systems and workstation clusters running PVM, Express, Linda, IBM EUI, nCUBE, Intel NT, you name it!
We welcome the opportunity to demonstrate our HPF capabilities to you if you give us a call. =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= FORGE Motif GUI Fortran Explorer =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Another major development is the release of our first interactive product to utilize the Motif graphical user interface. FORGE Explorer, based upon the FORGE Baseline Browser, is now available on IBM RS/6000, DEC Alpha, and HP workstations supporting the Motif GUI. It presents an easy-to-use and mostly intuitive approach to interprocedural program data and control flow analysis, tracing, and global context searching. We are moving to transform all FORGE interactive products into the world of Motif by the end of next year. FORGE Explorer is actually fun to use... you've got to see it! =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Advanced Shared Memory Parallelizers =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= The last area of product development we'd like to mention is the release of advanced new shared memory parallelizers that are able to optimize program cache management by padding and aligning global arrays automatically. We have also developed a Global Index Reordering (GIR) restructurer that can deliver a real performance gain by automatically reordering the indices of arrays in inner loops to eliminate inappropriate striding through memory. This restructuring, which is so tedious and error-prone when attempted by hand, can render a successful parallelization out of a marginal performer.
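As a concrete illustration of the striding problem that index reordering attacks, consider the sketch below (an illustrative example, not code taken from FORGE). Fortran stores arrays in column-major order, so an inner loop that runs over the rightmost subscript touches elements that are a whole column apart and defeats the cache; reordering the indices, or equivalently interchanging the loops for this simple kernel, restores stride-1 access.

      SUBROUTINE SCALE_BAD(A, N, M, S)
C     Inner loop varies the second (rightmost) subscript of A, so
C     consecutive iterations touch elements N reals apart: poor cache use.
      INTEGER N, M, I, J
      REAL A(N,M), S
      DO 10 I = 1, N
         DO 20 J = 1, M
            A(I,J) = S * A(I,J)
 20      CONTINUE
 10   CONTINUE
      END

      SUBROUTINE SCALE_GOOD(A, N, M, S)
C     After reordering, the inner loop varies the first subscript, so
C     memory is traversed with stride 1 (column-major order).
      INTEGER N, M, I, J
      REAL A(N,M), S
      DO 10 J = 1, M
         DO 20 I = 1, N
            A(I,J) = S * A(I,J)
 20      CONTINUE
 10   CONTINUE
      END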
=-=-=-=-=-=-=-= Vendor Support =-=-=-=-=-=-=-= A growing number of supercomputing vendors are now actively supporting the ongoing development of APRs products for their shared and distributed memory multiprocessor systems: IBM (SP1, Power/4, and RS/6000 clusters), Fujitsu (VPP500), Intel (Paragon), nCUBE, HP, DEC, Cray Computers (Cray 3). We also offer our products directly to end users of SGI, Cray Research, and Convex systems. =-=-=-=-= Further! =-=-=-=-= We look forward to a new year of challenges to provide the ultimate tools for parallel processing, and hope to be speaking with you soon. If you will be attending SuperComputing 93 in Portland, stop by and say hello and catch a demonstration of these products -- we will be in booth 401. John Levesque, President, Applied Parallel Research, Inc. ...for the APR Staff Applied Parallel Research, Inc. 550 Main St., Suite I Placerville, CA 95667 916/621-1600 forge@netcom.com -- /// Applied /// FORGE 90 Customer Support Group /// Parallel /// 550 Main St., Placerville, CA 95667 /// Research, Inc. (916) 621-1600 621-0593fax forge@netcom.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: forge@netcom.com (FORGE Customer Support) Subject: Automatic Parallelization News Release Summary: APR Announces Automatic Parallelization Tools for Fortran Keywords: Automatic Parallelization Fortran Organization: Applied Parallel Research, Inc. ..November 11, 1993 NEWS RELEASE Applied Parallel Research, Inc. 550 Main St., Suite I Placerville, CA 95667 Robert Enk, Sales and Marketing (301) 718-3733, Fax: (301) 718-3734 ----------------------------------------------------------------------- FOR IMMEDIATE RELEASE Applied Parallel Research announces the addition of two revolutionary products to the FORGE family of parallelization tools for Fortran and significant enhancements to its current set of products. Placerville, California, USA, November 11, 1993 -- Applied Parallel Research Inc. (APR) announces its MAGIC series of automatic parallelizing pre-compilers, FORGE Magic/DM for distributed memory systems and clustered workstations, and FORGE Magic/SM for shared memory parallel systems. Together these products represent the state-of-the-art in parallelization technology and take a giant step forward in providing development tools critical to the successful utilization of parallel processing systems. Magic/DM represents the first production quality automatic parallelization facility for distributed memory systems. Sophisticated interprocedural analysis allows Magic/DM to automatically identify the most significant loops within a program and to develop a parallelization strategy based upon those loops and the arrays they reference. On Initial benchmarks, Magic/DM has generated parallel code that achieves 80% of the performance obtained from hand parallelization. Optionally, Magic/DM can create an output file that is the original Fortran code with APR parallelization directives strategically embedded in the program. A detailed parallelization report is also available which describes for the programmer which arrays were partitioned and how the loops were parallelized, and, most importantly, indicates where parallelization could not be accomplished and what inhibitors are causing the problem. This output forms the basis of a first parallelization which the programmer can further refine through the use of parallel statistics gathering and APR Directives. 
Dan Anderson of the National Center for Atmospheric Research said, "This is just what our users need, a useable tool that not only parallelizes as much as possible, but also generates useful diagnostics that can be used to hand tune the application." Magic/SM is also an automatic batch parallelization tool but directed towards multi CPU shared memory systems. Magic/SM automatically analyzes candidate loops for parallelization and annotates the original program with the target systems compiler specific directives. It also produces a detailed parallelization report which can be used for further refinement of the parallelization. APR's HPF Compilation System, xHPF, has been enhanced to include an Auto Parallelization option. A user is now able to input a Fortran 77 program with optional sequential timing information to xHPF and generate a parallelized source file with Fortran 90 array syntax and HPF directives. This facility allows organizations that might standardize on HPF to convert their existing Fortran 77 programs to HPF without expensive and time consuming hand conversion. John Levesque, President of Applied Parallel Research said, "With the addition of these automatic parallelization products and enhancements, APR is able to offer the most complete and sophisticated set of Fortran parallelization tools in the industry. The FORGE Magic products provide the same ease of use for parallel computing systems that vectorizing compilers and pre-compilers have provided to users of vector machines. APR's combination of batch and interactive products can now address the needs of first time parallel system users as well as seasoned parallel programmers." APR's source code browser, FORGE Baseline has been enhanced and redesignated FORGE Explorer. FORGE Explorer is APR's first product to utilize the Motif graphic user interface and has been significantly restructured for ease of use in providing control flow information, variable usage and context sensitive query functions. Information on APR's product can be obtained by contacting Robert Enk, VP of Sales and Marketing at (301) 718-3733 or by E-mail at enk@netcom.com. -- /// Applied /// FORGE 90 Customer Support Group /// Parallel /// 550 Main St., Placerville, CA 95667 /// Research, Inc. (916) 621-1600 621-0593fax forge@netcom.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edwin@dutind2.twi.tudelft.nl (Edwin Vollebregt) Subject: Re: The Future of Parallel Computing Organization: Delft University of Technology References: <1993Nov11.142432.8924@hubcap.clemson.edu> <1993Nov15.154454.14783@hubcap.clemson.edu> Hello This thread about the future of parallel computation has been going on for a while. I have heard many good arguments coming along. Let me give my opinion, where I borrow a lot from the latest replies that I saved. Excuse me for not knowing who said what, and excuse me if I explain words precisely how you NOT meant them. I think the discussion was started as follows: > We are now in an age when the high performance machines have > various data network topologies, i.e. meshes, torii, linear arrays, > vector processors, hypercubes, fat-trees, switching networks, etc.. > etc.. These parallel machines might all have sexy architectures, but > we are headed in the wrong direction if we don't take a step back and > look at the future of our work. 
We shouldn't have to rewrite our > algorithms from scratch each time our vendor sells us the latest > hardware with amazing benchmarks. Very good point. IMHO, the low portability is partially the result of the lack of a uniform programming model, tools, languages and communication interfaces. > My hope is that, eventually, people will be able to switch from MPP A > to MPP B merely by recompiling all of their code (perhaps with a few > changes of #include files or predefined constants). This might be possible to some extent when standard interfaces arise. However, I think that it cannot be achieved completely, as I'll explain below. > In order for this to happen, though, the community must realize that a > loss of a few percent of performance in exchange for portability and ease > of coding and maintenance is acceptable for MPPs. This argument has also been given by others: > The goal of compiling for parallel code should NOT necessarily be "the > best possible code;" it should be "reasonably close to the best possible > code." In the future, when parallelizing compilers are much better than they are today, we can expect that only a few percent of performance will be lost by relying on them. We now come to my contribution to the discussion. My point is that in scientific computing there is much freedom in choosing an algorithm. There are many ways to calculate a result. Furthermore, there are large differences in parallelism between algorithms. Some algorithms are well suited to parallel processing, others are not. And no compiler can ever do anything about that. Finally, there is a large influence of architecture on parallelism. Thus the parallelism in an algorithm possibly cannot be exploited on MPP A. I read part of this point in other replies as well: > A computational scientist has to know his computer sufficiently well > to make it produce results (nearly) as efficiently as possible. The > scientist will have to push his methods (i.e., codes) whenever he > acquires a new hot box. ... > I believe there is no excuse for ignoring the hardware you use for > scientific computing. ... > A corollary is that "black-box" usage of codes or compilers in scientific > computing will often be poor use of resources. A computational scientist must know which algorithm is suitable for his architecture, and should tune his code to the architecture. Thus I disagree with: > A few replies to this thread have taken the line that scientific > computing is quite happy to have to 'streamline code', or in other > words hack routines in assembly language. Streamlining code, IMHO, means "use an algorithm that is suitable for the architecture at hand". In conclusion: > I think the time has come for software engineering to catch up with > hardware engineering in the parallel computing world. There is much need for standards: programming languages, communication libraries, memory models, and so on. There is also much need for parallelizing compilers that (semi-)automatically distribute tasks, generate communication statements, and that are good at getting the best performance out of an algorithm on a specific architecture. Finally, computational scientists should realize which parts of their algorithms are not well suited to an architecture. In the design of new applications, they should realize which parts are most likely to change when a new architecture becomes available, and should keep these parts well separated from the other parts of the program.
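To make that last point concrete, here is a small sketch of the kind of separation being advocated (my own example with hypothetical routine names, not code from any of the posters): the relaxation sweep is architecture-independent, and every machine-specific decision is pushed into one small exchange routine that is reimplemented per target.

      SUBROUTINE JACOBI_SWEEP(U, UNEW, NLOC)
C     Architecture-independent part: a local relaxation sweep.  All
C     machine-specific communication is hidden behind EXCHANGE_HALO,
C     a hypothetical routine with one implementation per target
C     (PVM, Express, a vendor library, or plain copies on one node).
      INTEGER NLOC, I
      REAL U(0:NLOC+1), UNEW(0:NLOC+1)
      CALL EXCHANGE_HALO(U, NLOC)
      DO 10 I = 1, NLOC
         UNEW(I) = 0.5 * (U(I-1) + U(I+1))
 10   CONTINUE
      END

      SUBROUTINE EXCHANGE_HALO(U, NLOC)
C     Single-processor stub: with only one node there are no neighbours,
C     so the boundary values are simply left in place.  A message-passing
C     version would send U(1) and U(NLOC) to the neighbouring processes
C     and receive their halo values here.
      INTEGER NLOC
      REAL U(0:NLOC+1)
      RETURN
      END

Porting to a new machine then means rewriting only the exchange routine for that machine's message passing library, while the separate question of whether the relaxation algorithm itself still suits the new architecture stays visible rather than buried throughout the code.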
> Please let me know what you think, Edwin _________________________________________________________________________ | | | | | Ir. Edwin A.H. Vollebregt | Section of Applied Mathematics | ,==. | | | Delft University of Technology | /@ | | | phone +31(0)15-785805 | Mekelweg 4 | /_ < | | edwin@pa.twi.tudelft.nl | 2628 CD Delft | =" `g' | | | The Netherlands | | |____________________________|_________________________________|__________| Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: news.announce.conferences,comp.arch,comp.sys.super From: nakasima@kuis.kyoto-u.ac.jp (Hiroshi Nakashima) Subject: ICS'94: email address for architecture track submission Sender: news@kuis.kyoto-u.ac.jp Reply-To: nakasima@kuis.kyoto-u.ac.jp Organization: Dept. of Info. Sci., Kyoto Univ., JAPAN Dear (potential) Authors, In the CFP of ICS'94 (8th ACM International Conference on Supercomputing), it is announced that "tomita@kuis.kyoto-u.ac.jp" is the e-mail adddress for the submission of papers on architecture. However, we would like to ask you authors to send your papers to: ics94arc@kuis.kyoto-u.ac.jp so that we can pick up submission mails easily. We also ask you to attach information for contact to PS manuscript, preferably in the enclosed format. Best Regards, Prof. Shinji Tomita Prof. Hiroshi Nakashima Vice Program Chair (Architecture) Dept. of Information Science Dept. of Information Science Kyoto University Kyoto University --- % Fill the following and attach it to PS manuscript, and send them % together to ics94arc@kuis.kyoto-u.ac.jp % \title{} % title of the paper \authors{}{} % author's name and affiliation % % if two or more authors, duplicate this entry like; % % \authors{1st author's name}{1st author's affi.} % % \authors{2nd author's name}{1st author's affi.} % % : % % \authors{n-th author's name}{n-th author's affi.} \name{} % name of the person for further contact \zip{} % zip code and/or country name \address{} % surface mail address \organization{} % organization name \section{} % section name \tel{} % phone number \fax{} % facsimile number \email{} % e-mail address % % The following is an example % \title{The Architecture of a Massively Parallel Computer} % \authors{Shinji Tomita}{Kyoto Univ.} % \authors{Hiroshi Nakashima}{Kyoto Univ.} % \name{Shinji Tomita} % \zip{606-01, JAPAN} % \address{Yoshida Hon-Machi, Sakyou-Ku, Kyoto} % \organization{Kyoto University} % \section{Dept. of Information Science} % \tel{+81-75-753-5373} % \fax{+81-75-753-5379} % \email{tomita@kuis.kyoto-u.ac.jp} Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: TRACS Team Subject: Opportunities for Research Visits to Edinburgh Organization: Edinburgh Parallel Computing Centre ********************************************************************** ********************************************************************** ** ** ** ** ** TRACS: Training and Research on Advanced Computing Systems ** ** ** ** ** ********************************************************************** ********************************************************************** Edinburgh Parallel Computing Centre is coordinating an EC-funded project to bring European researchers for short visits to associated departments in Edinburgh to collaborate on projects involving High Performance Computing~(HPC). 
TRACS provides:
* opportunities to visit and work in Edinburgh
* access to a wide range of HPC systems, including
  + Thinking Machines CM-200
  + Meiko CS-2
  + Meiko I860 Computing Surfaces
  + Meiko T800 Computing Surface
  + Sun/SGI Workstation cluster
* training, support and consultancy on parallel computing
* accommodation, travel and subsistence expenses
TRACS is open to both academic and industrial researchers resident in EC and EFTA countries. Application forms and further information are available both in electronic (latex/dvi) and paper formats and can be obtained from: TRACS Administrative Secretary EPCC James Clerk Maxwell Building University of Edinburgh Edinburgh EH9 3JZ United Kingdom Tel: +44 31 650 5986 Fax: +44 31 650 6555 Email: TRACSadmin@ed.ac.uk Please note that the closing date for applications to the next Scientific Users Selection Panel meeting is 4th January 1993. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: martens@cis.ohio-state.edu (Jeff Martens) Subject: Optical Switching Speed? Organization: Ohio State U. Dept. of Computer Science I know that there are various factors affecting electronic switching speed as a function of wire length. What I want to know is, other than the speed of light, are there any limits to optical switching speeds that vary with the distance of the link? References would be especially appreciated. I'll summarize to the net if there's interest. Thanks in advance. -- "The superfluous is very necessary."
-- Voltaire -- Jeff (martens@cis.ohio-state.edu) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: drg@cs.city.ac.uk (David Gilbert) Subject: Workshop on concurrency in computational logic : extended deadline Date: 16 Nov 1993 15:44:02 -0000 Organization: Computer Science Dept, City University, London Workshop on concurrency in computational logic December 13 1993 Department of Computer Science, City University, London, United Kingdom EXTENDED DEADLINE : 30 NOVEMBER 1993 (1) PLEASE NOTE THAT THE DEADLINE FOR CONTRIBUTIONS FOR THIS WORKSHOP HAS BEEN EXTENDED TO 30 NOVEMBER 1993 (2) ACCOMMODATION DETAILS BELOW (3) REGISTRATION DETAILS BELOW (4) LOCATION DETAILS BELOW +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ (1) Concurrency is a seminal topic in computer science, and is a research area of growing importance in the computational logic community. The methodologies required to describe, reason about and construct such systems encompass a wide range of viewpoints and some are still quite recent in origin. It is the aim of this workshop to collect researchers together in order to facilitate the exchange of ideas on concurrency in computational logic. Contributions are invited on the following topics: * system specification * semantics and theory * language design * programming methodologies * program analysis and transformation * programming environments Submissions can be extended abstracts or full papers, and should be limited to 15 pages. Electronic submission is preferred, as either LaTeX source or encapsulated postscript. Research students are particularly encouraged to make informal presentations of their research activities, based around a brief abstract, or to submit contributions for a poster display. Submissions should be sent to the following address and should be received by 31 October 1993 Dr. D. Gilbert Department of Computer Science, City University Northampton Square London EC1V 0HB UK email: drg@cs.city.ac.uk Proceedings will be distributed on an informal basis at the workshop to encourage presentation of ongoing work. However, it is intended that selected papers will be published in formal proceedings after the workshop. This workshop is organised jointly by City University and the University of Namur under the auspices of the Association of Logic Programming (UK), the British Council (UK), and the Commissariat General aux Relations Internationales (Belgium). Programme committee: Koen De Bosschere, University of Gent, Belgium David Gilbert, City University, UK Jean-Marie Jacquet, University of Namur, Belgium Luis Monteiro, University of Lisbon, Portugal Catuscia Palamidessi, University of Genova, Italy Jiri Zlatuska, Masaryk University, Czech Republic Important dates: Deadline for paper submission: 30 November 1993 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ (2) Accommodation Details Accommodation is available in the University Halls of residence at a cost of 19 pounds (ordinary room) or 26 pounds (superior room) per night per person, bed and breakfast. 
You should contact the Halls directly in order to make your booking; the details are: (a) Finsbury Residences Bastwick Street London EC1Y 3PE UK Tel: +44 71 251 4961 Fax: +44 71 608 2741 (b) Northampton Hall Bunhill Row London EC1Y 8LJ UK Tel: +44 71 628 2953 Fax: +44 71 374 0653 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ (3) Registration details Workshop on concurrency in computational logic December 13 1993 Department of Computer Science, City University, London, United Kingdom REGISTRATION FORM THERE IS NO CHARGE FOR ATTENDING THE WORKSHOP PLEASE FILL IN AND RETURN BY EMAIL, FAX OR POST TO: Dr David Gilbert Department of Computer Science City University, Northampton Square, London EC1V 0HB, UK tel: +44-71-477-8444 (direct) fax: +44-71-477-8587 email: drg@cs.city.ac.uk uucp: drg@citycs.uucp --------------------------------[FORM BEGIN]----------------------------------- I wish to attend the Workshop on concurrency in computational logic on December 13 1993 at the Department of Computer Science, City University. FIRST NAME: FAMILY NAME: INSTITUTION: ADDRESS: EMAIL: TELEPHONE: FAX: --------------------------------[FORM END]------------------------------------- +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ FURTHER INFORMATION (4) Location details The workshop will be held from 10.00 to 17.00 on Monday 13 December in the Senate Suite (Room A537), College Building, City University. Further detailed directions can be obtained from David Gilbert (see above). -- D R Gilbert tel: +44-71-477-8444 (direct) Department of Computer Science fax: +44-71-477-8587 City University, Northampton Square email: drg@cs.city.ac.uk London EC1V 0HB, UK uucp: drg@citycs.uucp Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.theory From: ss7540@csc.albany.edu (SHUKLA SANDEEP) Subject: Re: Performance evaluation of distributed algorithms Organization: State University of New York at Albany References: <1993Nov16.151830.3376@hubcap.clemson.edu> In article <1993Nov16.151830.3376@hubcap.clemson.edu> sgarg@egr.duke.edu (Sachin Garg) writes: >In performance evaluation of distributed algorithms, people generally >talk in terms of time and message complexity. However, in some models >of distributed computations (CSP, CCS), which were developed largely >for correctness verification, time has been incorporated to obtain >"finishing time of a program". It is either a probabilistic study, in which >case an expected finishing time is reported or a deterministic study >in which "upper or lower bounds" on the performance metric are reported. The timed models of CSP and CCS are not aimed at performance evaluation of distributed systems. TCSP and TCCS both aim at incorporating real-time concepts into the models of computation. For example, in the TCSP model the aim is a proof system for reasoning about the liveness and safety properties of distributed systems with timing constraints. Probabilistic CSP is an extension of the timed model that not only reasons about the timing properties of the system but also proves properties that hold probabilistically. A typical quotation by the authors of Timed CSP follows: "A unified theory of Concurrency may be developed by adding real time probability to the models of real time CSP. Such a model will allow a proper Universal measure of Fairness, e.g. 'Within 3.75 milliseconds, there is a 93.7% chance that a process will respond.'
" --- G.M Reed I think that most models like CSP, CCS or the probabilistic UNITY (J.R.Rao) when they incorporate time and probability in their models, the basic idea is that " It will facilitate the comparison and Unification of many different methods presently used to reason about Concurrent systems, and promote a far deeper understanding of Concurrency in general."(Reed). Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: yali@ecn.purdue.edu (Yan Alexander Li) Subject: Re: The Future of Parallel Computing Organization: Purdue University Engineering Computer Network References: <1993Nov11.142432.8924@hubcap.clemson.edu> <1993Nov15.154454.14783@hubcap.clemson.edu> <1993Nov16.165313.15793@hubcap.clemson.edu> The discussions of this thread have focused on software vs. hardware issue. The implied architecture is the homogeneous. I would like to suggest that we consider heterogeneous processing also. Imagine that we can write a application without worrying about the architecture. The compiler will do the analysis and dispatch different part of the program to the component (sub)systems that best suit them. A cluster of heterogeneous parallel systems tightly coupled by highbandwidth network or simply a heterogeneous parallel computer itself can exploit the heterogeneous architecture to cope with different characteritics of different part of one application. This naturally brings up the problem of standardization in algorithm description, high level language, compiler, OS and other software issues. It is both hardware and software issue, but IMHO the software people may have to make much more effort to have this work. -- Alex Li Graduate Assistant 1285 Electrical Engineering Purdue University West Lafayette, IN 47907 yali@ecn.purdue.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.arch,comp.parallel From: mash@mash.wpd.sgi.com (John R. Mashey) Subject: Re: INMOS Information available on T9000 Organization: Silicon Graphics, Inc. References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> In article <1993Nov16.151841.3483@hubcap.clemson.edu>, elwin@media.mit.edu (Lee W Campbell) writes: |> For the fun of it I checked it out. What was available was a Q&A and a press |> release. I got the press release. It's kind of amusing: |> |> 26 March, 1993. London, UK. INMOS announces sample |> availability of the T9000 transputer. Just three years |> after design commenced, ... |> |> Now I *know* I was hearing about this chip in the summer of '91 and Hot Chips, August 1991, had a presentation from Richard Forsyth, Bob Krysiak, Roget Shepherd on the T9000. In this, they say "T9000 available Q1 1992"; obviously things didn't work out that way ... but that's not unusual in this business... ACtually, as related quesiton, in that same Hot Chips was a description of the National Semiconductor Swordfish for embedded applications. I haven't heard much of that lately. CAn anyone shed some light on that one? -john mashey DISCLAIMER: UUCP: mash@sgi.com DDD: 415-390-3090 FAX: 415-967-8496 USPS: Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: DKGCampb@cen.ex.ac.uk Subject: Re: The Future of Parallel Computing Organization: University of Exeter, UK. 
References: <1993Nov11.142432.8924@hubcap.clemson.edu> <1993Nov15.154454.14783@hubcap.clemson.edu> <1993Nov16.165203.15351@hubcap.clemson.edu> In article <1993Nov16.165203.15351@hubcap.clemson.edu> Murray Cole writes: >In article <1993Nov15.154454.14783@hubcap.clemson.edu>, dbader@eng.umd.edu (David Bader) writes: > >> You need to look at the "granularity" of a problem to decide whether >> it will perform faster on a parallel machine. (For an introduction to >> granularity, see Stone, "High Performance Computer Architecture", Section 6.2). >> >> If your machine is meant for course-grained problems (such as the case you >> outline above), you will need to sum "n > N" numbers to see a speedup, where "N" >> is some large threshold for the given algorithm and machine size. > >That's what I'm getting at. The algorithm is clear and simple, but the peculiarities >of this machine or that machine mean that I can't be sure if its going to work >well (even if the underlying network is right), without lifting the lid. This >seems a little unfortunate. But then, if one had a "general" model of parallel computation, all machines would be able to be specified in terms of the model. So, performance could be predicted in that way, and portability would be provided for. Hopefully the general model would be efficient enough to provide a reasonable level of performance. However, if high performance is desired, then knowledge of the particular peculiarities of the target architecture, whether it be parallel or serial, would be required. In which case, one has "special purpose" computing. -- Duncan Campbell Acknowledgement: I'd like to thank me, Department of Computer Science without whom none of this University of Exeter, would have been possible. Prince of Wales Road, Exeter EX4 4PT Tel: +44 392 264063 Telex: 42894 EXUNIV G United Kingdom Fax: +44 392 264067 e-mail: dca@dcs.exeter.ac.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: Re: The Future of Parallel Computing Organization: Professional Student, University of Maryland, College Park References: <1993Nov11.142432.8924@hubcap.clemson.edu> <1993Nov15.154454.14783@hubcap.clemson.edu> <1993Nov16.165203.15351@hubcap.clemson.edu> Nntp-Posting-Host: pepsi.eng.umd.edu In article <1993Nov16.165203.15351@hubcap.clemson.edu> Murray Cole writes: >In article <1993Nov15.154454.14783@hubcap.clemson.edu>, dbader@eng.umd.edu (David Bader) writes: > >> You need to look at the "granularity" of a problem to decide whether >> it will perform faster on a parallel machine. (For an introduction to >> granularity, see Stone, "High Performance Computer Architecture", Section 6.2). >> >> If your machine is meant for course-grained problems (such as the case you >> outline above), you will need to sum "n > N" numbers to see a speedup, where "N" >> is some large threshold for the given algorithm and machine size. > >That's what I'm getting at. The algorithm is clear and simple, but the peculiarities >of this machine or that machine mean that I can't be sure if its going to work >well (even if the underlying network is right), without lifting the lid. This >seems a little unfortunate. Murray, have you looked at Stone's book? A quick review follows. Granularity isn't a "peculiarity" of a parallel machine. It is a measure of the ratio of computation time (R) to communication time (C) of the hardware. 
If, for instance, the communication is slow in a machine, C is large, making the granularity R/C small. And vice versa; if C is small, then the granularity R/C is large. A parallel machine's compiler obviously must know its physical node hardware, at least to obtain as close to optimal data layout as possible, and must know the interconnection network. Therefore, requiring it to know something about the granularity is acceptable. The granularity measure is similar to the stopping criterion in a recursive algorithm. The maximum size of an instance of a problem, where the time to execute that algorithm on a single processor is faster than distributing it onto more than 1 processors, is this criterion induced by the granularity. Most data parallel algorithms that I have worked with will run parallel steps on the input set until reaching this limit. When the input is reduced to this problem size, the best known sequential algorithm is then used to find the solution. I don't believe that the platform described above requires the user to know the intricate details of the hardware. Just my thoughts, --david David A. Bader Electrical Engineering Department A.V. Williams Building University of Maryland College Park, MD 20742 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: moorej@research.CS.ORST.EDU (Jason Moore) Subject: Re: Need introductory parallel programming book Message-ID: <2cbo97INNikk@flop.ENGR.ORST.EDU> Date: 16 Nov 93 23:38:15 GMT Article-I.D.: flop.2cbo97INNikk Posted: Tue Nov 16 15:38:15 1993 References: <1993Nov11.213044.26194@hubcap.clemson.edu> <1993Nov16.165105.15024@hubcap.clemson.edu> Reply-To: moorej@research.CS.ORST.EDU Organization: Oregon State University, Computer Science Department NNTP-Posting-Host: shale.cs.orst.edu |> @book{Quinn, |> author = {Michael J. Quinn}, |> address = {New York}, |> publisher = {McGraw-Hill}, |> title = {Designing Efficient Algorithms for Parallel Computers}, |> year = {1987} Michael Quinn has a 2nd Edition of the above book called Parallel Computing: Theory and Practice It's only been available for a few weeks. Try it, you'll like it. Jason -- -------- Jason Moore Internet: moorej@research.cs.orst.edu Department of Computer Science Bell-net: (503) 737-4052 Oregon State University "Anything worth doing is worth doing with a smile" Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: John Corb Newsgroups: comp.sys.transputer,comp.arch,comp.parallel Subject: Re: INMOS Information available on T9000 References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> Reply-To: John Corb Organization: NCR CSSD Network Systems, London In article <1993Nov16.151841.3483@hubcap.clemson.edu> elwin@media.mit.edu (Lee W Campbell) writes: >In article <1993Nov11.213059.26255@hubcap.clemson.edu>, thompson@inmos.co.uk () writes: > > The T9000 is the world's fastest single-chip computer, with > its 200 MIPS, 25 MFLOPS peak performance and its 32-bit > superscalar integer processor, 64-bit floating point unit, > virtual channel processor, 100Mbits/s communications ... > >World's fastest WHAT? Slower than Alpha, HP PA-RISC, RS/6000, R3000, >R4000, supersparc, Pentium, and roughly comperable to a fast '486, so >how in the hell do they manage to call it "fastest'??? they used sneaky wording - "single-chip computer" ^^^^^^^^ the alpha, pa-risc etc. 
are microprocessors, the t9000 is a microcomputer as it has cpu+memory+i/o all on chip, so it is a lot faster than 8051, z8 etc. (but them so's my pocket calculator :) they are trying to hype it as fast and it was the best they could come up with, it's a shame 'cos the t9000 is actually quite slick, sad huh? -- Net: john.corb@UnitedKingdom.NCR.COM Manager Network Support UUCP: uunet!ncrcom!ncruk!support!acid!john TEL: +44 71 725 8837 NIC handle: jc716 Voice+: 323-5187 FAX: +44 81 446 8269 when the going gets weird, the weird turn pro Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: pkrishna@cs.tamu.edu (P. Krishna) Subject: Call For Papers - FTPDS-94 Organization: Texas A&M Computer Science Department, College Station, TX CALL FOR PAPERS The 1994 IEEE Workshop on Fault-Tolerant Parallel and Distributed Systems June 13-14, 1994 College Station, Texas Sponsored by IEEE Computer Society Technical Committee on Fault-Tolerant Computing In cooperation with IFIP Working Group 10.4 Texas A&M University In conjunction with FTCS-24 Objectives The goal of this workshop is to provide a forum for researchers to present and exchange research results and advances in the field of Fault-Tolerant Parallel and Distributed Systems. Both hardware and system issues are of interest. Topics Topics of interest in the workshop include (but not limited to): *Fault-Tolerant Multiprocessor Systems *Fault Model Issues *Novel Hardware Architectures *Fault-Tolerant Networks *High Speed Microprocessor Issues *Experimental Systems *Formal Methods for Specification, Design *Recovery Techniques and Verification of Parallel and Distributed *Software Fault Tolerance Systems *Real-Time Systems *Reliable Design and Synthesis Tools *Empirical Studies and System Validation Participation Send five copies of a full manuscript not exceeding 5000 words to the Program Chairman by February 18, 1994. All submissions must be original and never published. Revised papers will appear in a book published by IEEE Computer Society Press following the workshop. All panel proposals should be received no later than March 15, 1994. For additional information concerning the workshop, please contact the General Chair. Any questions concerning the program or paper submission should be directed to the Program Chairman. General Chairman Program Chairman Dhiraj K. Pradhan Dimiter R. Avresky Department of Computer Science Dept. of Computer Science Texas A&M University Texas A&M University College Station, TX 77843-3112 College Station, TX 77843-3112 Phone: (409) 862-2438 Phone: (409) 862-4389 FAX: (409) 862-2758 FAX: (409) 862-2758 Email: pradhan@cs.tamu.edu Email: avresky@cs.tamu.edu DEADLINES Paper Submission: February 18, 1994 Notification of Acceptance: April 15, 1994 Copy of Revised Paper: August 15, 1994 Co-Chair Co-Program Chair David Rennels Herman Kopetz UCLA Institute fur Praktische Informatik TU of Vienna Vice General Chair Vice Program Chair Fabrizio Lombardi Nitin Vaidya Texas A&M University Texas A&M University Publications Chair Treasurer Jeffrey A. Clark Jennifer Welch The MITRE Corp. Texas A&M University Local Arrangements Chair Registration Chair Duncan M. Walker Wei Zhao Texas A&M University Texas A&M University Program Committee K. Birman (USA) Y. Levendel (USA) N. Bowen (USA) S. Low (USA) D. Bossen (USA) E. Maehle (Germany) J. Bruck (USA) A. Nordsieck (USA) B. Ciciani (Italy) U. Pooch (USA) M. Dal Cin (Germany) W. Sanders (USA) A. Costes (France) A. Sengupta (USA) F. 
Cristin (USA) D. Siewiorek (USA) W. Debany (USA) S. Shirvastava (UK) A. Goforth (USA) P. Sollock (USA) J. Hayes (USA) N. Suri (USA) R. Iyer (USA) K. Trivedi (USA) Y. Koga (Japan) P. Verissimo (Portugal) I. Koren (USA) Y. Kakuda (Japan) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: dcp79807@csie.nctu.edu.tw (Chih-Zong Lin) Newsgroups: comp.parallel,comp.lang.fortran Subject: REALIGN & REDISTRIBUTE! Date: 17 Nov 1993 07:11:38 GMT Organization: Dep. Computer Sci. & Information Eng., Chiao Tung Univ., Taiwan, R.O.C Dear Netter: I am a PhD student at NCTU and have some questions about data alignment and distribution. In the definition of High Performance Fortran, REALIGN and REDISTRIBUTE are provided to change data allocation dynamically. But when is it appropriate to use these directives? Are there any real applications that are suited to using them? -- Regards Miller ------------------------------------------------------------------------------ Chih-Zong Lin Email: dcp79807@csie.nctu.edu.tw Department of Computer Science and Information Engineering National Chiao-Tung University Hsinchu, Taiwan, Republic of China ------------------------------------------------------------------------------ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edwin@dutind2.twi.tudelft.nl (Edwin Vollebregt) Subject: Re: The Future of Parallel Computing Organization: Delft University of Technology References: <1993Nov9.163338.9165@hubcap.clemson.edu> <1993Nov10.132809.21164@hubcap.clemson.edu> <1993Nov16.152328.4885@hubcap.clemson.edu> In article <1993Nov16.152328.4885@hubcap.clemson.edu>, msodhi@agsm.ucla.edu (Mohan Sodhi) writes: > >Jeff> PLEASE don't forget about portability. If the same code can be > >Jeff> compiled onto multiple architectures, it will make the > >Jeff> programmer's job MUCH MUCH easier. (Tune to architecture as > >Jeff> needed, instead of rewrite from scratch.) > > >I agree 100%. > > The hopes of the authors' above are a little too pie-in-the sky. For one, > even for serial computers, porting an application from one operating system > to another can take _months_ and involve a lot of new code. That is, if the original application was not designed with portability in mind. We had to port an application of 30,000 lines of FORTRAN from one UNIX system to another, and did it within a day. Then we translated the FORTRAN code to C by f2c, and compiled this again without problems on the second UNIX system. Finally we ported the C version to one transputer. As soon as we obtained the transputer version of f2c (that is, as soon as we obtained a FORTRAN compiler for the transputer), this again was no problem. Conclusion: the application does not make use of OS-specific things. > Second, > developing architecture-free algorithms does not mean *no new code* -- it just > means no new math to be worked out. I do not think it is possible to > have a program compile under different architectures (even if the algorithm > is unchanged) with just a few compiler directives; I am not even sure > this is desirable. I do think this is possible in the future. > One thing at a time: let us concentrate on architecture > free algorithms for now (in my area, operations research, this itself is a > very new concept!): this will take our minds off the tooth fairy who will > write a program that will compile under every computer architecture and every > operating system.
Not all algorithms are _architecture_free_; consider, for instance, the summation of N variables or the computation of an inner product. This can be done very well on a hypercube, but not on a linear array or ring of processors. Thus an algorithm that relies heavily on inner products is not _architecture_free_. If the fastest sequential algorithms are not _architecture_free_, then it is not possible to write efficient, portable code. Conclusion: a scientist must know for which architectures his algorithm is suitable, and (s)he must tune the code to the architecture, or the architecture to the application. > Mohan Sodhi > msodhi@agsm.ucla.edu > Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: taylors@westminster.ac.uk (Simon Taylor) Subject: Conference: 2ND EUROMICRO WORKSHOP ON PARALLEL AND DISTRIBUTED PROCESSING ----------------------------------------------------------- 2ND EUROMICRO WORKSHOP ON PARALLEL AND DISTRIBUTED PROCESSING University of Malaga, Spain January 26-28th 1994 ----------------------------------------------------------- ABOUT THE WORKSHOP Parallel processing is a crucial strategic element in the race for sustainable teraflop performance across a wide range of computing applications. Major advances in parallel and distributed processing technology over the last decade have resulted in a wide range of hardware architectures and a host of parallel languages. However, as with serial processing in the 70's, parallel computing has entered its own software crisis; innovative techniques are now required to provide an effective engineering life-cycle for parallel systems. This workshop, the second organised by Euromicro, aims to provide a forum whereby users and developers of distributed computer technology can present and discuss the latest research results in the quest for usable parallel computers. The workshop will include keynote sessions with invited eminent speakers; sessions for open forum presentations; group and plenary discussion sessions; and full paper presentation sessions covering a wide range of topics in: parallel computer architecture and software methodology; parallel system performance engineering; and distributed application modelling. GENERAL INFORMATION Dates Workshop dates: January 26-28, 1994. Registration: Wednesday, January 26 at 8.30 - 9.15am. Reception: Wednesday January 26 at Hotel Melia Costa del Sol at 7.30pm Workshop Venue Hotel Melia Costa del Sol Playa del Bajondillo E-29620 Torremolinos-Malaga Tel. +34 (9) 5 238 66 77 Fax: +34 (9) 5 238 64 17 Telex: 77326 METEL E Travel The Workshop will be held at the Hotel Melia Costa del Sol, which is situated 15 kms west of the centre of Malaga. Malaga international airport is 13 kms away from the city centre, and 7 kms from the Hotel Melia. A taxi from Malaga airport to the Hotel Melia costs about 1200 Ptas. You can also take the train from Malaga airport to Torremolinos (100 Ptas), and then walk for about 10 minutes to the Hotel Melia. Climate Malaga is the capital of the Costa del Sol, which is sunny most of the time. The climate is usually warm and pleasant at the end of January in Malaga; the average daytime temperature is 17 degrees C. Currency The unit of currency in Spain is the Peseta. At the end of September the exchange rate was 1 US$ = 126 Ptas. Registration Kindly fill in the enclosed form. Please ensure that all relevant details are completed and return the form together with the appropriate fees to the Euromicro office as soon as possible.
The registration fees are quoted in Dutch guilders (Dfl). Accommodation A hotel reservation form is included. Please SEND OR FAX THE BOOKING FORM BEFORE JANUARY 1ST 1994, DIRECTLY TO THE TRAVEL AGENCY. Other Events and Attractions Before and after the workshop you can take part in tourist events in Andalucia. There are about a hundred golf courses in the Costa del Sol. You may ski in the Sierra Nevada mountains (two hours drive from the Hotel Melia), where the 1995 World Ski Championship will be celebrated. The Travel Agency has planned two tours for the Workshop participants: Nerja-Frigiliana and Alhambra-Granada. Prices and conditions are specified in the Accommodation Registration Form. Nerja - Frigiliana: Tuesday, January 25th. Depart by coach at 9am from Hotel Melia, taking the coast road through several villages to reach the picturesque town of Nerja. Visit the impressive prehistoric caves and take lunch in Meson Toledano. Spend the afternoon in the beautiful Andalusian village of Frigiliana. Granada - Alhambra: Saturday, January 29th. Depart by coach at 8.30am from Hotel Melia, making a short stop in Riofrio en route to Granada. Visit the Alhambra fortress, famous monument to Arabic architecture; the Charles V Palace; and gardens of the Generalife, enjoying panoramic views of the city, the Sierra Nevada mountains and the historic quarters of Sacromonte and Albaicin. Lunch in restaurant Alixares, and spend some free time in Granada. Cancellation Charges These apply to Workshop registration and accommodation bookings made through the local travel agency, as follows: Up to January 1st, 1994: 90% reimbursement Up to January 15th, 1994: 10% reimbursement TECHNICAL PROGRAMME Wednesday 26th January 1994 8.30 - 9.15 Registration and Coffee 9:15 - 10.00 Keynote Session Chairman: S C Winter (UK) Speaker: Prof H Zima (A) High Performance Languages and their Compilation 10.00 - 10.20 Coffee 10.20 - 11.30 Session 1 Image Processing Chairwoman: E Pissaloux (F) Representation and Measurement of Non-Rigid Egocentric Motion: A Parallel Implementation G J McAleese and P J Morrow (UK) Image Processing on Parallel Machines: a Protocol for Managing Global Objects Cremonesi, N Scarabottolo and D Sorrenti (I) Efficient Implementation of an Abstract Programming Model for Image Processing on Transputers D Crookes and T J Brown (UK) 11.40 - 12.50 Session 2 Parallelization Chairman: A Tyrrell(UK) How To Implement Distributed Algorithms Efficiently R Diekmann, K Menzel, F Stangenberg (D) Parallel Discrete Event Simulation Using Space-Time Events H Ahmed, L Barriga and R Ayani (S) Deterministic Parallel Execution of Sequential Programs N Kalantery, S C Winter and D R Wilson (UK) 12.50 - 14.00 Lunch 14.00 - 15.30 Session 3 Parallel Architectures Chairman: D Tabak (USA) A New Massively Parallel Architecture Relying on Asynchronous Communications S Mohammadi, D Dulac, A Merigot (F) Creatures and Spirals: A Data Parallel Object Architecture I Stephenson and R Taylor (UK) Zelig: A Novel Parallel Computing Machine Using Reconfigurable Logic N Howard, N Allinson, A Tyrrell (UK) Harp: A Statically Scheduled Multiple-Instruction-Issue Architecture and its Compiler R Adams, S Gray, G Steven (UK) 15.30 - 15.45 Coffee 15.45 - 16.30 Session 4 Neural Nets Chairman: K Grosspietsch (D) Introducing Parallelism in Power Systems Diagnosis C Rodriguez, J Muguerza, J J Martin, A Laguente and S Rementeria (E) A Transputer-based Neural Network to Determine Object Positions from Tri-aural Ultrasonic Sensor Data J Chen and J Van 
Campenhout (B) 16.30 - 18.00 Session 5 Networks and Communications Chairman: G F Ciccarella (I) A New Approach to Concurrent Ring: 1 Bit Latency Sandoval, C Sandoval, A Suarez and C Ramirez (E) Integrating Memory and Network Accesses: A Flexible Processor-Network Interface for Efficient Application Execution Y Chen and C King (TC) Evaluation of a Priority Flow Control Scheme for Multicomputer Networks A H Smai and H Wu (S) Mad-Postman: a Look-ahead Message Propagation Method for Static Bidimensional Meshes C Izu (UK) 18.30 - 19.30 Open Forum Session Chairman: S J E Taylor (UK) Papers to be announced 19.30 Reception Thursday 27th January 1994 8.30 - 9.15 Session 6 Formal Methods Chairman: F Tirado (E) A BDD Package for a Massively Parallel SIMD Architecture G P Cabodi, S Gai, M Rebaudengo and M S Reorda (I) Change Diagram: A Model for Behavioural Description of Asynchronous Circuit/Systems E Pissaloux, A Kondratyev, A Taubin and V Varshavsky (F) 9.15 - 10.00 Keynote Session Chairman: E Zapata (E) Speaker: Prof M Valero (E) Efficient Access to Streams in Multi-Module Memories 10.00 - 10.30 Coffee 10.30 - 12.45 Session 7 Parallel Numerical Algorithms Chairman: A Gonzalez (E) A New Algorithm For Singular Value Decompositions M Ralha (P) Numerical Aspects and Solution of some Nonlinear Schroedinger Systems on a Distributed Memory Parallel Computer I Martin, F Tirado and L Vazquez (E) An efficient parallel implementation of the ML-EM algorithm for PET image reconstruction with a multi-threaded operating system K Bastiaens, I Lemahieu, P Desmedt and W Vandermeersch (B) 3D Reconstruction of Macro-molecules on Multiprocessors R Asenjo, J Cabaleiro, J Carazo and E Zapata (E) Parallel Simulated Annealing: Getting Super Linear Speedups A Genco (I) The Enhancement of Parallel Numerical Linear Algorithms using a Visualisation Approach E Stuart and J Weston (UK) 12.45 - 14.00 Lunch 14.00 - 15.30 Group Discussion Sessions 15.30 - 15.45 Coffee 15.45 - 16.30 Session 8 Load Balancing Chairman: R McConnell (UK) The Efficient Management of Task Clusters in a Dynamic Load Balancer W Joosen, J Pollet and P Verbaeten (B) The Benefits of Migration in a Parallel Objects Programming Environment A Ciampolini, A Corradi, L Leonardi and F Zambonelli (I) 16.30 - 18.00 Session 9 Distributed Operating Systems Chairman: N Scarabottolo (I) Some Issues for the Distributed Scheduling Problem in the MO2 Distributed Real-Time Object-Oriented System B Mecibah and A Attoui (F) A Distributed Algorithm for Fault-Tolerant Dynamic Task Scheduling E Maehle and F Markus (D) Three Domain Voting in Real-Time Distributed Control Systems J Bass, P Croll, P Fleming and L Woolliscroft (UK) The Helios Tuple Space Library L Silva, B Veer and J Silva (P) 18.30 - 19.30 Open Forum Session Chairman: S J E Taylor (UK) Papers to be announced Evening free Friday 28th January 1994 8.30 - 9.15 Session 10 Real Time Systems Chairman: P Kacsuk (H) Real Time Implementation of a Multivariable Adaptive Controller G Ciccarella, F Loriga, P Marietti (I) A Switch Architecture for Real-Time Multimedia Communications G Smit and P Havinga (NL) 9.15 - 10.00 Keynote Session Chairman: M.
Valero (E) Speaker: Prof D Padua (USA) Evaluation of Parallelization Techniques 10.00 - 10.15 Coffee 10.15 - 12.30 Session 11 Performance Engineering Chairman: J M Troya (E) Evaluation of Benchmark Performance Estimation for Parallel Fortran Programs on Massively Parallel SIMD and MIMD Machines T Fahringer (A) Workload Characterization for Performance Engineering of Parallel Applications E Rosti and G Serazzi (I) Combining Functional and Performance Debugging of Parallel and Distributed Systems based on Model-driven Monitoring P Dauphin (D) A New Trace & Replay System for Shared Memory Programs based on Lamport Clocks L Levrouw, K Audenaert and J Van Campenhout (B) Monitoring, Analysis and Tuning of Parallel Programs within the FortPort Migration Environment R McConnell and P Milligan (UK) Supporting Process Migration Through Communicating Petri Nets G Bucci, R Mattolini and E Vicario (I) 12.30 - 13.30 Lunch 13.30 - 15.00 Session 12 Parallel Programming Chairman: P Milligan (UK) Improving Performance with Serialisation G R Justo and P H Welch (UK) Wavefront Scheduling in Logflow P Kacsuk (H) Implementing Distributed Reactive Programs in Linda A Clematis and V Gianuzzi (I) Multi-Level Copying for Unification in Parallel Architectures A Ciampolini, E Lamma, P Mello and C Stefanelli (I) 15.00 - 15.15 Coffee 15.15 - 16.45 Session 13 Software Engineering Chairman: E Luque (E) Automatic Data Distribution and Parallelization A Dierstein, R Hayer and T Rauber (D) Transputer Based System Software E Luque, M Senar, D Franco, P Hernandez, E Heymann and J Moure (E) A Real-Time Multiprocessors Application Development Environment Design and Implementation N Zergainoh, T Maurin, Y Sorel and C Lavarenne (F) A Life-Cycle for Parallel and Distributed Systems based on Two Formal Models of Concurrency M Bettaz, G Reggio (Alg) 16.45 - 18.30 Plenary Discussion Session 20.00 Farewell Dinner ORGANISERS Organising Chairman Emilio L. Zapata Dept. Arquitectura de Computadores University of Malaga Plaza El Ejido, s/n E-29013 Malaga Spain Tel: +34 5 213 1404 Fax: +34 5 213 1413 Email: ezapata@ctima.uma.es Programme Chairman Stephen Winter Centre for Parallel Computing University of Westminster 115 New Cavendish St. London W1M 8JS United Kingdom Tel: + 44 71 911 5099 Fax: + 44 71 911 5143 Email: wintersc@westminster.ac.uk Deputy Programme Chairman Simon Taylor Centre for Parallel Computing University of Westminster 115 New Cavendish St. London W1M 8JS United Kingdom Tel: +44 71 911 5000 ext 3586 Fax: +44 71 911 5143 Email: taylors@westminster.ac.uk Euromicro Office Mrs Chiquita Snippe-Marlisa P.O. Box. 
2346 NL 7301 EA Apeldoorn The Netherlands Tel: +31 55 557 372 Fax: +31 55 557 393 Email: chiquita@info.vub.ac.be ----------------------------------------------------------- ACCOMMODATION RESERVATION FORM 2nd Euromicro Workshop on Parallel and Distributed Processing January 26-28th 1994, University of Malaga, Spain Name ________________________________________________ Company/Institution ____________________________________ Address ________________________________________________ City ____________________ Country _____________________ Phone ____________________ Fax _________________________ OFFICIAL HOTEL Hotel Melia Costa del Sol (4 star) Rate per room and night including breakfast (VAT included): DOUBLE (2 persons) 6.510,-Ptas SINGLE USE (1 person) 5.100,-Ptas Please reserve _______ Double room(s) and/or _______ Single Use room(s) Arrival date _______________ Departure date _______________ TOURIST VISITS + Nerja-Frigiliana (Tuesday, January 25th) 4.850,-Ptas (VAT included) + Granada - Alhambra (Saturday, January 29th) 7.950,-Ptas (VAT included) Please reserve _________ persons to visit Nerja-Frigiliana and/or _________ persons to visit Alhambra-Granada RESERVATION DEPOSIT To confirm the reservation, payment of the following deposit is necessary: Hotel = 25% Tourist visits = 50% All deposits will be deducted from your final bill which must be settled before leaving. Note: The tourist visits will be cancelled if there are less than 30 participants and in this case 97.5% of the registration will be refunded. Date _______________ Total Deposit _________________ Ptas METHODS OF PAYMENT + By bank draft in Pesetas payable to VIAJES INTERSOL, S.A. by a Spanish Bank. + By bank transfer: BANCO DE SANTANDER (c/o VIAJES INTERSOL, A.S.). Account number: 20187. Agency number: 609. Gran Via, 80. E-28013 Madrid. + By Credit Card (Visa): Card Number ___________________________________ Expiry date ___________________________________ Name of Cardholder _____________________________ Signature of Cardholder ________________________ ----------------------------------------------------------- Please send this form together with the cheque or copy of your bank transfer, where appropriate, to VIAJES INTERSOL, S.A. Avda. Palma de Mallorca, 17 E-29620 Torremolinos-Malaga. Spain Phone: +34 5 238 3101 / +34 5 238 3102 Fax: +34 5 237 2905 Telex: 79307 WLCS E Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Herman.te.Riele@cwi.nl (Herman te Riele) Subject: Parallel Computational Number Theory and Cryptography Symposium Organization: CWI, Amsterdam CWI - RUU SYMPOSIA "MASSIVELY PARALLEL COMPUTING AND APPLICATIONS" In 1993-1994, the Centre for Mathematics and Computer Science Amsterdam (CWI) and the University of Utrecht (RUU) are organising a series of symposia on massively parallel computing and applications. 
This is to announce the third meeting which centres around the theme: COMPUTATIONAL NUMBER THEORY AND CRYPTOGRAPHY Date: Friday November 26, 1993 Location: CWI, Kruislaan 413, Amsterdam Room: Z011 Program 10.00 - 10.30: Coffee/Tea 10.30 - 10.35: Welcome 10.35 - 11.20: Jean-Jacques Quisquater (Catholic University of Louvain, Belgium) Exhaustive searches, collisions, meet-in-the-middle attacks: a parallel perspective 11.30 - 12.15: Francois Morain (Ecole Polytechnique, Palaiseau, France) Distributed primality proving 12.15 - 13.30: Lunch break 13.30 - 14.15: Johannes Buchmann (Universitaet des Saarlandes, Germany) Factoring with the number field sieve 14.25 - 15.10: Peter L. Montgomery (Stieltjes Institute for Mathematics, Leiden, and CWI Amsterdam) Vectorization of the elliptic curve method 15.10 - 15.30: Tea break 15.30 - 16.15: Henk Boender (RU Leiden, and CWI Amsterdam) Factoring with some variations of the quadratic sieve on the Cray Y-MP4 Dates and themes of the previous meetings: June 4, 1993: Topics in Environmental Mathematics Sept. 24, 1993: Parallel Numerical Algorithms For further information, e.g., about how to reach CWI, contact H.J.J. te Riele (CWI, tel. 020-5924106) If you wish to receive a LaTeX-file of the abstracts of the lectures, send a message to herman@cwi.nl Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.arch,comp.parallel From: matthew@colussus.demon.co.uk (Matthew Harrison) Subject: Re: INMOS Information available on T9000 References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> Organization: BIO-RAD Microscience Reply-To: matthew@colussus.demon.co.uk In article <1993Nov16.151841.3483@hubcap.clemson.edu> elwin@media.mit.edu writes: > availability of the T9000 transputer. Just three years > after design commenced, ... > In a different life, I was working on a project in 1990 which used T800s. In 4Q of that year we heard about the T9000, and I think at that time Inmos said samples would be available in 2Q 1991. Just 2.5 years optimistic. It was too long for us to wait... We were thinking at the time, why didn't they do what Nottingham University have done this year, and make a virtual channel coprocessor? T800s are a bit long in the tooth, now, but with enough of them, and an efficient way of feeding them with data collecting the results, that is not a nightmare to write software for, you've got a reasonable bang per buck. > > World's fastest WHAT? Slower than Alpha, HP PA-RISC, RS/6000, R3000, > R4000, supersparc, Pentium, and roughly comperable to a fast '486, so > how in the hell do they manage to call it "fastest'??? I think the emphasis is on "single-chip computer", onchip serial comms and all that. But you're right, I don't think they've updated their release material since the T9000 was originally supposed to have been available. -- Matthew Harrison (who is not a spokesperson for the company he works for) Does anybody here speak for their company? :-) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: westman@tp.umu.se (Olof Westman) Newsgroups: comp.sys.transputer,comp.arch,comp.parallel Subject: Re: INMOS Information available on T9000 Organization: University of Umea, Sweden References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> elwin@media.mit.edu (Lee W Campbell) writes: > The T9000 is the world's fastest single-chip computer, with >World's fastest WHAT? 
Slower than Alpha, HP PA-RISC, RS/6000, R3000, >R4000, supersparc, Pentium, and roughly comperable to a fast '486, so >how in the hell do they manage to call it "fastest'??? The keyword here is *single-chip computer*. As opposed to single chip CPU. I agree on the other points. I read about the T9000 in september 1990. It was called the H1 att the time and I got interested. That interest died 2 years later. If Inmos wants the transputer to be used for anything else than sewing machines and stuff they have to produce new processors more frequently. Every eight years won't do. Olof Westman sysadmin Dept. Phys. Univ. of Umeaa westman@tp.umu.se From: dcp79807@csie.nctu.edu.tw (Chih-Zong Lin) Newsgroups: comp.compilers Subject: REALIGN & REDISTRIBUTE! Date: 17 Nov 1993 07:11:38 GMT Organization: Dep. Computer Sci. & Information Eng., Chiao Tung Univ., Taiwan, R.O.C Nntp-Posting-Host: dcp79807%@pdp3.csie.nctu.edu.tw X-Newsreader: TIN [version 1.2 PL0] [Cross post from comp.parallel] Dear Netter: I am a PhD student at NCTU and have some question about data alignment and distribution. In the definition of High Performance Fortran, REALIGN and REDISTRIBUTE are provided to change data allocation dynamically. But, when is appropriate to use these directives? Is there any real applications that is suitable to use these directives? -- Regards Miller ------------------------------------------------------------------------------ Chih-Zong Lin Email: dcp79807@csie.nctu.edu.tw Department of Computer Science and Information Engineering National Chiao-Tung University Hsinchu, Taiwan, Republic of China ------------------------------------------------------------------------------ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edsr!edsdrd!man@uunet.UU.NET (Marc Nurmi) Subject: Re: PARMACS I read your posting to comp.parallel regarding the availability of PARMACS V6.0. Do you have Internet addresses and/or phone numbers for PALLAS? Also, are there other tools that provide support for parallel distributed computing that you would recommend? Thanks, Marc Nurmi - man@edsdrd.eds.com ...!uunet!edsr!edsdrd!man Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.arch,comp.parallel From: steved@inmos.co.uk (Stephen Doyle) Subject: Re: INMOS Information available on T9000 Organization: INMOS Limited, Bristol, UK References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> In article <1993Nov16.151841.3483@hubcap.clemson.edu> elwin@media.mit.edu (Lee W Campbell) writes: >In article <1993Nov11.213059.26255@hubcap.clemson.edu>, thompson@inmos.co.uk () writes: >> Some information regarding the IMS T9000 transputer is now available on the >> INMOS FTP server, ftp.inmos.co.uk [192.26.234.3], in the directory >> /inmos/info/T9000. This relates mainly to the superscalar processor and >> the cache memory system at present. > > The T9000 is the world's fastest single-chip computer, with > its 200 MIPS, 25 MFLOPS peak performance and its 32-bit > superscalar integer processor, 64-bit floating point unit, > virtual channel processor, 100Mbits/s communications ... > >World's fastest WHAT? Slower than Alpha, HP PA-RISC, RS/6000, R3000, >R4000, supersparc, Pentium, and roughly comperable to a fast '486, so >how in the hell do they manage to call it "fastest'??? 
> If you read carefully the statement is "single-chip computer", this is emphasising that the T9000 design allows programs to run with no external support logic or memory. I.e. just using on-chip RAM and serial links to download code you can run an application, plainly not a large application but nevertheless an application. Now in practice of course usage in this way is likely to be rare but the concept remains the same that the T9000 requires minimal external support circuitry to function. Hence the MIPS per board area is extremely favourable compared with the chips you mention above. For example a 4MB T9000 HTRAM module is 55.5x90 mm whereas your bulk standard 486 motherboard is around 230x300mm (14 times larger). regards, Steve Steve Doyle, Software Marketing, INMOS Ltd | Tel +44 454 616616 1000 Aztec West | Fax +44 454 617910 Almondsbury | UK: steved@inmos.co.uk Bristol BS12 4SQ, UK | US: steved@inmos.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: DKGCampb@cen.ex.ac.uk Subject: Re: The Future of Parallel Computing Organization: University of Exeter, UK. References: <1993Nov15.154454.14783@hubcap.clemson.edu> <1993Nov16.165313.15793@hubcap.clemson.edu> <1993Nov17.134155.24622@hubcap.clemson.edu> In article <1993Nov17.134155.24622@hubcap.clemson.edu> yali@ecn.purdue.edu (Yan Alexander Li) writes: >The discussions of this thread have focused on software vs. hardware issue. >The implied architecture is the homogeneous. I would like to suggest that we >consider heterogeneous processing also. Imagine that we can write a application >without worrying about the architecture. >The compiler will do the analysis and dispatch different part of the program to >the component (sub)systems that best suit them. > >A cluster of heterogeneous parallel >systems tightly coupled by highbandwidth network or simply a heterogeneous >parallel computer itself can exploit the heterogeneous architecture to cope with >different characteritics of different part of one application. This naturally >brings up the problem of standardization in algorithm description, high >level language, compiler, OS and other software issues. It is both hardware and >software issue, but IMHO the software people may have to make much more effort >to have this work. OK, so you have a het. arch. Now how are you going to predict performance? How are you going to determine whether your parallel software will run more efficiently on a different het. (or homogeneous) parallel arch.? -- Duncan Campbell Acknowledgement: I'd like to thank me, Department of Computer Science without whom none of this University of Exeter, would have been possible. Prince of Wales Road, Exeter EX4 4PT Tel: +44 392 264063 Telex: 42894 EXUNIV G Approved: parallel@hubcap.clemson.edu Path: bounce-back From: D.Lamptey@sheffield.ac.uk (D Lamptey) Newsgroups: comp.sys.transputer,comp.arch,comp.parallel Subject: Re: INMOS Information available on T9000 Followup-To: comp.sys.transputer,comp.arch,comp.parallel Organization: Academic Computing Services, Sheffield University References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> Lee W Campbell (elwin@media.mit.edu) wrote: : In article <1993Nov11.213059.26255@hubcap.clemson.edu>, thompson@inmos.co.uk () writes: : : For the fun of it I checked it out. What was available was a Q&A and a press : release. I got the press release. 
It's kind of amusing: (deletions) : The T9000 is the world's fastest single-chip computer, with : its 200 MIPS, 25 MFLOPS peak performance and its 32-bit : superscalar integer processor, 64-bit floating point unit, : virtual channel processor, 100Mbits/s communications ... : World's fastest WHAT? Slower than Alpha, HP PA-RISC, RS/6000, R3000, : R4000, supersparc, Pentium, and roughly comperable to a fast '486, so : how in the hell do they manage to call it "fastest'??? : -- Often in error; Never in Doubt! elwin@media.mit.edu 617-253-0381 : Lee Campbell MIT Media Lab I am afraid, this time in error yet again :( I think the key-words here are "single-chip" . Have a read of the stuff again of what the chip has on it, compare it to other chips that have the same functionality, and then come and post your retraction/apology. I guess Thompson did not reply to you as he's rather busy at the moment. What you read probably went like: "high performance central processing unit (CPU),a 16 Kbyte cache, communications system and other support functions on a single chip. " etc, etc. (Rest for you to read) Derryck. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Murray Cole Subject: Re: The Future of Parallel Computing Organization: Department of Computer Science, University of Edinburgh References: <1993Nov11.142432.8924@hubcap.clemson.edu> <1993Nov15.154454.14783@hubcap.clemson.edu> <1993Nov16.165203.15351@hubcap.clemson.edu> <1993Nov17.134224.24886@hubcap.clemson.edu> In article <1993Nov17.134224.24886@hubcap.clemson.edu>, dbader@eng.umd.edu (David Bader) writes: > Murray, have you looked at Stone's book? A quick review follows. > -- definition of granularity in terms of Comp - Comms ratio Yes (great book!). My point is that communication (in whatever guise) is such a fundamental part of parallel algorithm design that it seems unreasonable for manufacturers to say "Here is our new parallel machine. It requires large grain computations" and then blame designers of parallel software/algorithms for a failure to "catch up" with them. The gap could be closed from two directions. I appreciate that I may be overstating this a bit (!), but then the prior debate seemed to have been heavily loaded in the other direction. Murray. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.arch,comp.parallel From: ucapcdt@ucl.ac.uk (Christopher David Tomlinson) Subject: Information on the Connection m/c Message-ID: <1993Nov17.150636.415092@ucl.ac.uk> Organization: Bloomsbury Computing Consortium I am looking for references/information on the connection machine (both cm1 and cm2) in particular anything regarding the design of the processing elements. I would be grateful if anybody could supply me with any leads. Thanks in advance Chris Tomlinson C.Tomlinson@ucl.ac.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Dave Beckett Newsgroups: comp.parallel,comp.sys.transputer,comp.parallel.pvm Subject: [LONG] Transputer, occam and parallel computing archive: ADMIN Followup-To: comp.sys.transputer Organization: Computing Lab, University of Kent at Canterbury, UK. Summary: Loads more files. See NEW FILES archive for details. Keywords: transputer, occam, parallel, archive This is the administrative information article for the Transputer, occam and parallel computing archive. Please consult the accompanying article for details of the new files and areas. 
In the last two weeks I've added another 9 megabytes of files to the archive at unix.hensa.ac.uk in /parallel. It currently contains over 66 Mbytes of freely distributable software and documents, in the transputer, occam and parallel computing subject area. STATISTICS ~~~~~~~~~~ >3410 users accessed archive (470 more than last time) >1580 Mbytes transfered (380MB more) since the archive was started in early May. Top 10 files accessed, excluding Index files 771 /parallel/README 384 /parallel/pictures/T9000-schematic.ps.Z 343 /parallel/reports/misc/soft-env-net-report.ps.Z 292 /parallel/documents/inmos/occam/manual3.ps.Z 221 /parallel/Changes 173 /parallel/reports/ukc/T9000-systems-workshop/all-docs.tar.Z 166 /parallel/software/folding-editors/origami.tar.Z 166 /parallel/index/ls-lR.Z 141 /parallel/faqs/parallel-C++-classes-1 131 /parallel/books/prentice-hall New this time: the first parallel C++ class faq. WHERE IS IT? ~~~~~~~~~~~~ At the HENSA (Higher Education National Software Archive) UNIX archive. The HENSA/UNIX archive is accessible via an interactive browsing facility, called fbr as well as email, DARPA ftp, gopher and NI-FTP (Blue Book) services. For details, see below. HOW DO I FIND WHAT I WANT? ~~~~~~~~~~~~~~~~~~~~~~~~~~ The files are all located in /parallel and each directory contains a short Index file of the contents. If you want to check what has changed in between these postings, look at the /parallel/Changes file which contains the new files added. There is also a full text index available of all the files in /parallel/index/FullIndex.ascii but be warned - it is very large (over 200K). Compressed and gzipped versions are in the same directory. For those UNIX dweebs, there are output files of ls-lR in /parallel/index/ls-lR along with compressed and gzipped versions too. HOW DO I CONTACT IT? ~~~~~~~~~~~~~~~~~~~~ There are several ways to access the files which are described below - log in to the archive to browse files and retrieve them by email; transfer files by DARPA FTP over JIPS or use Blue Book NI-FTP. Logging in: ~~~~~~~~~~~ JANET X.25 network: call uk.ac.hensa.unix (or 000049200900 if you do not have NRS) JIPS: telnet unix.hensa.ac.uk (or 129.12.21.7) Once connected, use the login name 'archive' and your email address to enter. You will then be placed inside the fbr restricted shell. Use the help command for up to date details of what commands are available. Transferring files by FTP ~~~~~~~~~~~~~~~~~~~~~~~~ DARPA ftp from JIPS/the internet: site: unix.hensa.ac.uk (or 129.12.21.7) login: anonymous password: Use the 'get' command to transfer a file from the remote machine to the local one. When transferring a binary file it is important to give the command 'binary' before initiating the transfer. For more details of the 'ftp' command, see the manual page by typing 'man ftp'. The NI-FTP (Blue Book) request over JANET path-of-file from uk.ac.hensa.unix Username: guest Password: The program to do an NI-FTP transfer varies from site to site but is usually called hhcp or fcp. Ask your local experts for information. Transferring files by Email ~~~~~~~~~~~~~~~~~~~~~~~~~~ To obtain a specific file email a message to archive@unix.hensa.ac.uk containing the single line send path-of-file or 'help' for more information. Browsing and transferring by gopher ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >From the Root Minnesota Gopher gopher, select the following entries: 8. Other Gopher and Information Servers/ 5. Europe/ 37. United Kingdom/ 15. 
HENSA unix (National software archive, University of Kent), (UK)/ 3. The UNIX HENSA Archive at the University of Kent at Canterbury/ 9. Parallel Archive/ and browse the archive as normal. [The numbers are very likely to change] The short descriptions are abbreviated to fit on an 80 column display but the long ones can always be found under 'General Information.' (the Index files). Updates to the gopher tree follow a little behind the regular updates. COMING SOON ~~~~~~~~~~~ A better formatted bibliograpy of the IOS press (WoTUG, NATUG et al) books. A HUGE bibliography of occam papers, PhD theses and publications - currently about 2000 entries. The rest of the INMOS archive server files. WoTUG related papers and information. NATUG information and membership form. A freely distributable occam compiler for workstations. A couple of free occam compiler for transputers. DONATIONS ~~~~~~~~~ Donations are very welcome. We do not allow uploading of files directly but if you have something you want to donate, please contact me. Dave Beckett Computing Laboratory, University of Kent at Canterbury, UK, CT2 7NF Tel: [+44] (0)227 764000 x7684 Fax: [+44] (0)227 762811 Email: djb1@ukc.ac.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: chk@erato.cs.rice.edu (Charles Koelbel) Subject: High Performance Fortran Forum II Organization: Rice University, Houston CALL FOR PARTICIPATION HIGH PERFORMANCE FORTRAN FORUM II KICKOFF MEETING January 13-14, 1994 Wyndham Greenspoint Hotel Houston, Texas In 1992, the High Performance Fortran Forum (HPFF) conducted a series of meetings that led to the definition of High Performance Fortran (HPF). The HPF language has elicited wide comment, both positive and negative, and several vendors are now implementing compilers. In order to reach consensus on the language features, several important capabilities were explicitly not considered; these include parallel input and output, task parallelism, and support for complex data structures. Due to these limitations, we are tentatively planning a new series of HPFF meetings to take place in 1994. The purpose of the HPFF II kickoff meeting is to decide whether a new series of meetings is needed and, if so, what the scope of the new HPF should be. To this end, presentations by vendors, users, and researchers will consider experience with current HPF prototypes and needs for the future. The following speakers have been confirmed: Geoffrey Fox, Syracuse University John Levesque, Applied Parallel Research David Loveman, DEC Rob Schreiber, RIACS Guy Steele, Thinking Machines Joel Saltz, University of Maryland Henk Sips, Technical University Delft Speakers TBA, IBM & Portland Group Inc. Several other speakers are tentatively planned. The tentative schedule is: January 12 7:00pm-9:00pm Opening reception January 13 8:30am-5:00pm Vendor & user talks; discussion January 14 8:30am-noon User & research talks; planning for future The meeting will be held at: Wyndham Greenspoint Hotel 12400 Greenspoint Drive Houston, TX 77060 Phone: 1-800-822-4200 Fax: 713-875-4596 There will be a courtesy van from the airport to the hotel. A block of rooms has been reserved for the nights of January 12-14 at the rate of $70.00 per night. Please make your reservations directly with the hotel before December 29; mention ``HPFF II'' to get the special rate. The registration cost for the meeting is $75.00. 
This includes the reception on January 12, lunch on January 13, coffee breaks, and copies of the speakers' slides. If you plan to attend, please fill out and return the form below. Organizing Committee Ken Kennedy, HPFF Chair Charles Koelbel, HPFF I Executive Director Mary Zosel, HPFF II Executive Director ------------------------------------------------------------------------------- High Performance Fortran Forum II Kickoff Meeting Registration Form January 13--14, 1994 Wyndham Greenspoint Hotel 12400 Greenspoint Drive Houston, Texas 77060 Phone: 1-800-822-4200 Fax: 713-875-4596 Please return this form to Theresa Chatman tlc@cs.rice.edu Or send a hardcopy to: Theresa Chatman CITI/CRPC, Box 1892 Rice University Houston, TX 77251-1892 Fax: 713-285-5136 Name: Organization: Address: E-mail: Phone: Fax: I will attend the HPFF II Kickoff Meeting (YES or NO): If yes, dietary restrictions? Vegetarian? Kosher? Other (please specify)? Please enclose a $75.00 check or money order payable to Rice University if sending by US mail; otherwise, registration will be collected at the meeting. Please add me to the HPFF mailing list (YES or NO): Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.arch,comp.parallel From: markm@ee.ubc.ca (mark milligan) Subject: Re: INMOS Information available on T9000 Organization: University of BC, Electrical Engineering References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> <1993Nov17.134548.26043@hubcap.clemson.edu> I have just heard that they are shipping small quantities of beta test chips that are still very buggy and not fully implemented yet. These beta chips only run at 15 MHz, a far cry from the promised 50 MHz. The new estimated date for the 40 MHz release is now 2Q 94; I remember that two years ago they were promising us that they could deliver the chips in 1Q 92. I'm glad we never put an order in, we would still be waiting. :^) -- ------- Mark R. Milligan markm@ee.ubc.ca University of British Columbia Department of Electrical Engineering Telerobotics and Control Lab. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: khb@chiba.Eng.Sun.COM (chiba) Subject: Public comment period for X3H5 model document begins Date: 17 Nov 93 10:56:03 Organization: SunPro It is desirable for a wide cross section of the community to comment on proposed Standards. Accredited Standards Committee X3, Information Processing Systems News Release October 11, 1993 Proj. No. 737-D Reply to Lynn Barra 202-626-5738 75300.2665@compuserve.com X3 Announces the Public Review and Comment Period on X3.252.199x, Parallel Processing Model for High Level Programming Languages Washington, D. C. -- Accredited Standards Committee X3, Information Processing Systems announces the four-month public review and comment period on X3.252.199x, Parallel Processing Model for High Level Programming Languages. The comment period extends from October 29, 1993 through February 26, 1994. The intent of this standard is to define parallel constructs which are portable and language independent. The standard provides a method of improving the performance of programs that can execute on parallel computing systems, without requiring that they do so. Therefore, all standard-conforming programs will be executable by a single process.
This draft also defines parallelism in terms of nested structured constructs, specifying where it is valid for parallel execution to occur, and provides synchronization mechanisms for communication among processes participating in the execution of such a construct. The comment period ends on February 26, 1994. Please send all comments to X3 Secretariat, Attn: Lynn Barra, 1250 Eye Street NW, Suite 200, Washington, DC 20005-3922. Send a copy to American National Standards Institute, Attn: BSR Center, 11 West 42nd St 13th Floor, New York, NY 10036. Purchase this standard in hard copy from: Global Engineering Documents, Inc. 15 Inverness Way Englewood, CO 80112-5704 1-800-854-7179 (within USA) 714-979-8135 (outside USA) Single Copy Price: $35.00 International Price: $45.50 -- ---------------------------------------------------------------- Keith H. Bierman keith.bierman@Sun.COM| khb@chiba.Eng.Sun.COM SunPro 2550 Garcia MTV 12-40 | (415 336 2648) fax 964 0946 Mountain View, CA 94043 Copyright 1993 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mln@blearg.larc.nasa.gov (Michael L Nelson) Subject: A Comparison of Queueing, Cluster and Distributed Computing Systems Organization: NASA Langley Research Center, Hampton, VA USA The following report is now available via anonymous ftp: techreports.larc.nasa.gov/pub/techreports/larc/93/tm109025.ps.Z A Comparison of Queueing, Cluster and Distributed Computing Systems Joseph A. Kaplan (j.a.kaplan@larc.nasa.gov) Michael L. Nelson (m.l.nelson@larc.nasa.gov) NASA Langley Research Center October, 1993 Abstract Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. In this paper, we compare a variety of cluster management and queueing systems: Distributed Queueing Systems (DQS), Condor, LoadLeveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE) and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From: conpar94@risc.uni-linz.ac.at (CONPAR 94) Subject: CFP: CONPAR 94 (Correction) Keywords: call for papers, parallel and vector processing Organization: RISC, J.K. University of Linz, Austria The previously posted call for papers was an old version that did not include the complete program committee. Please use and propagate this version. ----------------------------------------------------------------------------- CONPAR 94 - VAPP VI Johannes Kepler University of Linz, Austria September 6-8, 1994 Second Announcement and Call For Papers The past decade has seen the emergence of two highly successful conference series, CONPAR and VAPP, on the subject of parallel processing.
The Vector and Parallel Processors in Computational Sciene meetings were held in Chester (VAPP I, 1981), Oxford (VAPP II, 1984), and Liverpool (VAPP III, 1987). The International Conferences on Parallel Processing took place in Erlangen (CONPAR 81), Aachen (CONPAR 86) and Manchester (CONPAR 88). In 1990 the two series joined together and the CONPAR 90 - VAPP IV conference was organized in Zurich. CONPAR 92 - VAPP V took place in Lyon, France. The next event in the series, CONPAR 94 - VAPP VI, will be organized in 1994 at the University of Linz (Austria) from September 6 to 8, 1994. The format of the joint meeting will follow the pattern set by its predecessors. It is intended to review hardware and architecture developments together with languages and software tools for supporting parallel processing and to highlight advances in models, algorithms and applications software on vector and parallel architectures. It is expected that the program will cover: * languages / software tools * automatic parallelization and mapping * hardware / architecture * performance analysis * algorithms * applications * models / semantics * paradigms for concurrency * testing and debugging * portability A special session will be organized on Parallel Symbolic Computation. The proceedings of the CONPAR 94 - VAPP VI conference are intended to be published in the Lecture Notes in Computer Science series by Springer Verlag. This conference is organized by GUP-Linz in cooperation with RISC-Linz, ACPC and IFSR. Support by GI-PARS, OCG, OGI, IFIP WG10.3, IEEE, ACM, AFCET, CNRS, C3, BCS-PPSG, SIG and other organizations is being negotiated. Schedule: Submission of complete papers and tuturials Feb 15 1994 Notification of acceptance May 1 1994 Final (camera-ready) version of accepted papers July 1 1994 Paper submittance: Contributors are invited to send five copies of a full paper not exceeding 15 double-spaced pages in English to the program committee chairman at: CONPAR 94 - VAPP VI c/o Prof. B. Buchberger Research Institute for Symbolic Computation (RISC-Linz) Johannes Kepler University, A-4040 Linz, Austria Phone: +43 7236 3231 41, Fax: +43 7236 3231 30 Email: conpar94@risc.uni-linz.ac.at The title page should contain a 100 word abstract and five specific keywords. CONPAR/VAPP also accepts and explicitly encourages submission by electronic mail to conpar94@risc.uni-linz.ac.at. Submitted files must be either * in uuencoded (preferably compressed) DVI format or * in uuencoded (preferably compressed) Postscript format as created on most Unix systems by cat paper.ps | compress | uuencode paper.ps.Z > paper.uue Organising committee: Conference Chairman: Prof. Jens Volkert Honorary Chairman: Prof. Wolfgang Handler Program Chairman: Prof. Bruno Buchberger Members: Siegfrid Grabner, Wolfgang Schreiner Conference Address: University of Linz, Dept. of Computer Graphics and Parallel Processing (GUP-Linz), Altenbergerstr. 69, A-4040 Linz, Austria Tel.: +43-732-2468-887 (885), Fax.: +43-732-2468-10 Email: conpar94@gup.uni-linz.ac.at Program committee: Chairman: Bruno Buchberger (Austria) Makoto Amamiya (Japan), Francoise Andre (France), Marco Annaratone (USA), P.C.P. Bhatt (India), Dario Bini (Italy), Arndt Bode (Germany), Kiril Boyanov, Helmar Burkhart (Switzerland), Cristina Coll (Spain), Michel Cosnard (France), Frank Dehne (USA), Mike Delves (UK), Ed F. Deprettere (The Netherlands), Jack Dongarra (USA), Iain Duff (UK), Klaus Ecker (Germany), John P. 
ffitch (UK), Rolf Fiebrich (USA), Ian Foster (USA), Geoffrey Fox (USA), Christian Fraboul (France), Wolfgang Gentzsch (Germany), Thomas Gross (USA), Gaetan Hains (Canada), Guenter Haring (Austria), Hiroki Honda (Japan), Hoon Hong (Austria), F. Hossfeld (Germany), Roland N. Ibbett (UK), Chris Jesshope (UK), Harry Jordan (USA), Peter Kacsuk (Hungary), Erich Kaltofen (USA), Hironori Kasahara (Japan), Wolfgang Kleinert (Austria), Wolfgang Kuechlin (Germany), Otto Lange (Germany), Michael A. Langston (USA), Allen D. Malony (USA), Alfonso Miola (Italy), Nikolay Mirenkov (Japan), Yoichi Muraoka (Japan), David A. Padua (USA), Cherri Pancake (USA), Dennis Parkinson (UK), Guy-Rene Perrin (France), Ron Perrot (UK), Bernard Philippe (France), Brigitte Plateau (France), Ramon Puigjaner (Spain), Michael J. Quinn (USA), Gerard L. Reijns (The Netherlands), Karl-Dieter Reinartz (Germany), Dirk Roose (Belgium), Bl. Sendov (Bulgaria), Othmar Steinhauser (Austria), Ondrej Sykora (Slovakia), Denis Trystram (France), Marco Vanneschi (Italy), Paul Vitanyi (The Netherlands), Jens Volkert (Austria), R. Wait (UK), Paul S. Wang (USA), Peter Zinterhof (Austria) Reply Form: We encourage you to reply via e-mail, giving us the information listed below. If you do not have the possibility to use e-mail, please copy the form below and send it to the conference address. CONPAR 94 - VAPP VI Reply Form Name:..................................First Name................Title......... Institution:................................................................... Address:....................................................................... Telephone:.....................Fax:...........................E-Mail:.......... Intentions (please check appropriate boxes) o I expect to attend the conference o I wish to present a paper o I wish to present at the exhibition (industrial / academic) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stefis@zeno.informatik.uni-mannheim.de (Stefan Fischer) Subject: KSR locks and barriers Organization: A poorly-installed InterNetNews site Hi, recently, we made some measurements on a KSR with 32 processors. We used a variable number of threads, from about 5 to about 65. In all cases, we used two versions of the parallel program, one with explicit use of lock and condition variables, and one with barrier synchronization. We were a bit amazed by the results: (a) In thee case of less than 32 threads, barrier synchronization was always faster than lock synchronization (about 15-20%). (b) When we had more threads than processors, the performance of barrier sync. decreased dramatically, while the speedup of the lock version in relation to the sequential version still increased. Is this (b) a known effect, and if so, do you have an explanation for it? 
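For readers who have not seen the two styles side by side, here is a minimal, purely illustrative sketch in POSIX-threads style of the kind of counting barrier a barrier-synchronised version might use, built out of a lock and a condition variable. It is NOT the KSR thread library, and barrier_t, barrier_wait, NTHREADS and NROUNDS are names invented for the example.

/* Illustrative only: a POSIX-threads-style counting barrier built from a
 * lock and a condition variable -- a sketch of the "barrier" style being
 * discussed above, not the KSR thread library. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NROUNDS  3

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  all_here;
    int count;       /* threads still expected in the current round */
    int nthreads;    /* number of participating threads             */
    int generation;  /* distinguishes successive barrier rounds     */
} barrier_t;

static barrier_t bar = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                         NTHREADS, NTHREADS, 0 };

static void barrier_wait(barrier_t *b)
{
    pthread_mutex_lock(&b->lock);
    int gen = b->generation;
    if (--b->count == 0) {              /* last thread in: release the rest */
        b->generation++;
        b->count = b->nthreads;
        pthread_cond_broadcast(&b->all_here);
    } else {
        while (gen == b->generation)    /* wait until this round completes */
            pthread_cond_wait(&b->all_here, &b->lock);
    }
    pthread_mutex_unlock(&b->lock);
}

static void *worker(void *arg)
{
    long id = (long)arg;
    for (int round = 0; round < NROUNDS; round++) {
        /* ... do this round's share of the computation here ... */
        barrier_wait(&bar);             /* nobody starts the next round early */
        printf("thread %ld finished round %d\n", id, round);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Since no thread can leave barrier_wait() until every participant has arrived, a run with more threads than processors stalls at every barrier waiting for threads that are currently descheduled, whereas a lock-and-condition-variable version lets whichever threads happen to be running carry on. That is one commonly offered explanation for behaviour like (b), though only measurement on the machine in question can confirm it.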
Stefan -- Stefan Fischer Universitaet Mannheim email: stefis@pi4.informatik.uni-mannheim.de Praktische Informatik IV tel : +49 621 292 1407 Seminargebaeude A5, C117 68131 Mannheim Germany fax : +49 621 292 5745 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dowd@acsu.buffalo.edu (Patrick Dowd) Subject: CFP - SIGCOMM'94 Reply-To: dowd@eng.buffalo.edu Organization: State University of New York at Buffalo Call for Papers ACM SIGCOMM'94 CONFERENCE Communications Architectures, Protocols and Applications University College London London, UK August 31 to September 2, 1994 (Tutorials and Workshop, August 30) An international forum on communication network applications and technologies, architectures, protocols, and algorithms. Authors are invited to submit full papers concerned with both theory and practice. The areas of interest include, but are not limited to: -- Analysis and design of computer network architectures and algorithms, -- Innovative results in local area networks, -- Mixed-media networks, -- High-speed networks, routing and addressing, support for mobile hosts, -- Resource sharing in distributed systems, -- Network management, -- Distributed operating systems and databases, -- Protocol specification, verification, and analysis. A single-track, highly selective conference where successful submissions typically report results firmly substantiated by experiment, implementation, simulation, or mathematical analysis. General Chair: Jon Crowcroft, University College London Program Chairs: Stephen Pink, Swedish Institute of Computer Science Craig Partridge, BBN Publicity Chair: Patrick Dowd, State University of New York at Buffalo Local Arrangements Chair: Soren-Aksel Sorensen, University College London Papers must be less than 20 double-spaced pages long, have an abstract of 100-150 words, and be original material that has not been previously published or be currently under review with another conference or journal. In addition to its high quality technical program, SIGCOMM '94 will offer tutorials by noted instructors such as Paul Green and Van Jacobson (tentative), and a workshop on distributed systems led by Derek McAuley. Important Dates: Paper submissions: 1 February 1994 Tutorial proposals: 1 March 1994 Notification of acceptance: 2 May 1994 Camera ready papers due: 9 June 1994 All submitted papers will be judged based on their quality and relevance through double-blind reviewing where the identities of the authors are withheld from the reviewers. Authors names should not appear on the paper. A cover letter is required that identifies the paper title and lists the name, affiliation, telephone number, email, and fax number of all authors. Authors of accepted papers need to sign an ACM copyright release form. The Proceedings will be published as a special issue of ACM SIGCOMM Computer Communication Review. The program committee will also select a few papers for possible publication in the IEEE/ACM Transactions on Networking. Submissions from North America should be sent to: Craig Partridge BBN 10 Moulton St Cambridge MA 02138 All other submissions should be sent to: Stephen Pink Swedish Institute of Computer Science Box 1263 S-164 28 Kista Sweden Five copies are required for paper submissions. Electronic submissions (uuencoded, compressed postscript) should be sent to each program chair. 
Authors should also e-mail the title, author names and abstract of their paper to each program chair and identify any special equipment that will be required during its presentation. Due to the high number of anticipated submissions, authors are encouraged to strictly adhere to the submission date. Contact Patrick Dowd at dowd@eng.buffalo.edu or +1 716 645-2406 for more information about the conference. Student Paper Award: Papers submitted by students will enter a student-paper award contest. Among the accepted papers, a maximum of four outstanding papers will be awarded full conference registration and a travel grant of $500 US dollars. To be eligible the student must be the sole author, or the first author and primary contributor. A cover letter must identify the paper as a candidate for this competition. Mail and E-mail Addresses: General Chair Jon Crowcroft Department of Computer Science University College London London WC1E 6BT United Kingdom Phone: +44 71 380 7296 Fax: +44 71 387 1397 E-Mail: J.Crowcroft@cs.ucl.ac.uk Program Chairs Stephen Pink (Program Chair) Swedish Institute of Computer Science Box 1263 S-164 28 Kista Sweden Phone: +46 8 752 1559 Fax: +46 8 751 7230 E-mail: steve@sics.se Craig Partridge (Program Co-Chair for North America) BBN 10 Moulton St Cambridge MA 02138 Phone: +1 415 326 4541 E-mail: craig@bbn.com Publicity Chair Patrick Dowd Department of Electrical and Computer Engineering State University of New York at Buffalo 201 Bell Hall Buffalo, NY 14260-2050 Phone: +1 716 645 2406 Fax: +1 716 645 3656 E-mail: dowd@eng.buffalo.edu Local Arrangements Chair Soren-Aksel Sorensen Department of Computer Science University College London London WC1E 6BT United Kingdom Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: barmar@Think.COM (Barry Margolin) Subject: Re: AI and Parallel Machines Date: 18 Nov 1993 23:15:50 GMT Organization: Thinking Machines Corporation, Cambridge MA, USA References: <1993Nov5.153315.2624@unocal.com> Nntp-Posting-Host: telecaster.think.com In article <1993Nov5.153315.2624@unocal.com> stgprao@st.unocal.COM (Richard Ottolini) writes: >In article <1993Nov4.152811.24420@hubcap.clemson.edu> angelo@carie.mcs.mu.edu (Angelo Gountis) writes: >>I am looking for references regrading the impact parallel processing has >>had on projects involving AI. >Thinking Machines started as an A.I. company. >They are one of the more successful parallel computing companies. >There customer base is more scientific computing these days. But not totally. Here's part of a recent article about work done at one of our customer sites (extracted from HPCwire): PSC Researcher Wins AI Award for CM-2 Translation Program Pittsburgh, Pa. -- Hiroaki Kitano imagines a society in which hand-held computers will allow native and foreign speakers to converse on any street corner. Just say a sentence in English and out come the appropriate words in French. Kitano, a computer scientist specializing in artificial intelligence (AI), is bypassing traditional AI methods to tackle the translation challenge. And his efforts, which involved the Connection Machine CM-2 at the Pittsburgh Supercomputing Center, has earned him the most prestigious award in artificial intelligence for researchers under 35. He received the Computers and Thought Award Tuesday at the 13th International Joint Conference on Artificial Intelligence in Chambery, France. The article goes on to say that he made use of a memory-based approach to the implementation. 
Memory-based reasoning (in which the program builds up a database of past solutions, and looks for close matches when trying to solve new problems) has been applied in many CM applications to AI; it's one of the approaches that's only currently feasible on an MPP. A number of Thinking Machines technical reports related to AI applications have been published. For specific references, get the file think/trs/pub.list_abs, which contains information about all our published technical reports. Look for entries containing "KRNL" (Knowledge Representation and Natural Languages), "vision", "learning", or "natural language" in their Subject lines. MPP technology has also been applied to the study of real intelligence. One of our customers (sorry, I forget which) has studied scans of the visual cortex in animals when it's responding to visual stimuli, and then developed models of it on the CM. I hope this post doesn't seem too biased or commercial; I'm sure people have been doing AI research on other parallel processors, but these are just the things I'm familiar with (I'm not a researcher, so I don't follow the field in general). -- Barry Margolin System Manager, Thinking Machines Corp. barmar@think.com {uunet,harvard}!think!barmar Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: co1dll@ss5.sheffield.ac.uk (D Lamptey) Newsgroups: comp.sys.transputer,comp.arch,comp.parallel Subject: Re: INMOS Information available on T9000 Followup-To: comp.sys.transputer,comp.arch,comp.parallel Organization: Academic Computing Services, Sheffield University References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> <1993Nov17.134249.25077@hubcap.clemson.edu> John Corb (root@acid.unitedkingdom.NCR.COM) wrote: : In article <1993Nov16.151841.3483@hubcap.clemson.edu> elwin@media.mit.edu (Lee W Campbell) writes: : >In article <1993Nov11.213059.26255@hubcap.clemson.edu>, thompson@inmos.co.uk () writes: : : > The T9000 is the world's fastest single-chip computer, with : > : >how in the hell do they manage to call it "fastest'??? : they used sneaky wording - "single-chip computer" : ^^^^^^^^ sneaky wording is often a matter of how well you read something. Just like "small-print". This print was not small, though. : the alpha, pa-risc etc. are microprocessors, the t9000 is a microcomputer as : it has cpu+memory+i/o all on chip, so it is a lot faster than 8051, z8 etc. : (but them so's my pocket calculator :) : they are trying to hype it as fast and it was the best they could come up with, : it's a shame 'cos the t9000 is actually quite slick, sad huh? It is not sad, and not hype. It is true. Transputers have traditionally been very powerful and cost effective in embedded type systems, where: The systems are often highly parallel (transputer cant be beaten) The transputer concept is cryingly close to the way parallel real-time systems are specified and designed. The chip count needs to be low (How does one sound?) The power consumption also needs to be low. (3 watts - 5 watts) On a price to performance ratio for highly scalable systems, the transputer is a strong contender. The alphas, etc have different strong points, i.e brutal number-crunching,( and egg-frying). In my usual objectiveness, I shall have to say that INMOS have'nt yet been able to deliver on their promises and a lot of people have made decisions to jump ship, because there are other devices offering performance on par with the t9 specs (or better). 
But for embedded type systems, we are yet to see anything else overtake the t9. Derryck, p.s. For an idea of what can be done with transputers, have a look round for the paper "Transputers on the Road" arising out of the WTC congress in Aachen by the Daimler Benz group. It is about a transputer system (around 19 t800 transputers) doing real-time road-scene analysis, pattern and road-sign recognition, vehicle tracking and a whole lot more. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: brown@reed.edu (C. Titus Brown) Newsgroups: comp.parallel,comp.unix.osf.osf1 Subject: Alpha chips in parallel/OS to go with Date: 19 Nov 1993 10:58:11 GMT Organization: Reed College, Portland, OR I'm looking into getting a large number of Alpha chips to do some parallel computing. Does anyone know of any alternatives to DEC's Farm 1000 rack? Does anyone have experience with the Farm 1000 rack? In addition, if anyone knows of a stripped-down OS that will work on the Alpha chip or could be ported to the Alpha chip, so that we don't have to keep OSF/1 in memory on each chip, please let me know... I suppose I should give a general idea of the requirements: relatively low amount of communication between the processors (on the order of regular Ethernet); relatively low need for disk space on each node/processor; PVM should work. Currently we're also looking into a company called 'Avalon' (?) which has something called the Q-board. This apparently runs only under OpenVMS when attached to a VAX; does anyone know if Avalon or another company has this available for OSF/1? Thanks, --Titus -- "Never put off until tomorrow, that which can be done the day after tomorrow" -- C. Titus Brown, anonymous student, brown@reed.edu Meddle not in the affairs of dragons, for you are crunchy and good with ketchup. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: D.Lamptey@sheffield.ac.uk (D Lamptey) Subject: Re: INMOS Information available on T9000 Organization: Finnish Academic and Research Network Project - FUNET Hi, John Corb wrote: In article <1993Nov16.151841.3483@hubcap.clemson.edu> elwin@media.mit.edu (Lee W Campbell) writes: >In article <1993Nov11.213059.26255@hubcap.clemson.edu>, thompson@inmos.co.uk () writes: > > The T9000 is the world's fastest single-chip computer, with > its 200 MIPS, 25 MFLOPS peak performance and its 32-bit > superscalar integer processor, 64-bit floating point unit, > virtual channel processor, 100Mbits/s communications ... > >World's fastest WHAT? Slower than Alpha, HP PA-RISC, RS/6000, R3000, >R4000, supersparc, Pentium, and roughly comperable to a fast '486, so >how in the hell do they manage to call it "fastest'??? they used sneaky wording - "single-chip computer" ^^^^^^^^ the alpha, pa-risc etc. are microprocessors, the t9000 is a microcomputer as it has cpu+memory+i/o all on chip, so it is a lot faster than 8051, z8 etc. (but them so's my pocket calculator :) they are trying to hype it as fast and it was the best they could come up with, it's a shame 'cos the t9000 is actually quite slick, sad huh? ---- End included message ---- Sneaky wording does depend on how well the document is read. The transputer is undoubtedly ahead when it comes to the areas of stuff like distributed/embedded/real-time control, because parallelism is easily expressed. Low chip count -> higher reliability, etc. Low power consumption. (3 watts to around 5) The devices we are talking about have different strengths.
Transputers are better suited to real-time/embedded type systems and the others not so much. You would not use a transputer as a compute-engine if there was an alpha available. (And you can't fry eggs on a transputer) That aside, for highly parallel highly scalable systems, transputers provide a very favourable option in terms of performance/price. Those interested in the truth are referred to a paper arising out of the Aachen WTC congress in september by the Daimler Benz group called "Transputers on the Road" featuring a transputer -based system for autonomous driving (up to 90mph) with real-time object recognition and avoidance, lane tracking, road-sign recognition and a lot more. (It was something like 19 t800 transputers). That aside, INMOS have not delivered on their promises as yet. (2 yrs late?) This has resulted in a great erosion of support for their machines, especially as other devices of similar functionality (maybe more costly) have shown up. But for the embedded market, the transputer still remains one of the best suited. Derryck. Newsgroups: news.announce.conferences Path: risc.uni-linz.ac.at!conpar94 From: conpar94@risc.uni-linz.ac.at (CONPAR 94) Subject: CFP: CONPAR 94 (Correction) Followup-To: comp.parallel Keywords: call for papers, parallel and vector processing Sender: netnews@risc.uni-linz.ac.at (Netnews SW Account) Nntp-Posting-Host: melmac.risc.uni-linz.ac.at Organization: RISC, J.K. University of Linz, Austria Date: Thu, 18 Nov 1993 09:11:20 GMT Apparently-To: uunet!comp-parallel The previously posted call for papers was an old version that did not include the complete program committee. Please use and propagate this version. ----------------------------------------------------------------------------- CONPAR 94 - VAPP VI Johannes Kepler University of Linz, Austria September 6-8, 1994 Second Announcement and Call For Papers The past decade has seen the emergence of two highly successful series of CONPAR and of VAPP conferences on the subject of parallel processing. The Vector and Parallel Processors in Computational Sciene meetings were held in Chester (VAPP I, 1981), Oxford (VAPP II, 1984), and Liverpool (VAPP III, 1987). The International Conferences on Parallel Processing took place in Erlangen (CONPAR 81), Aachen (CONPAR 86) and Manchester (CONPAR 88). In 1990 the two series joined together and the CONPAR 90 - VAPP IV conference was organized in Zurich. CONPAR 92 - VAPP V took place in Lyon, France. The next event in the series, CONPAR 94 - VAPP VI, will be organized in 1994 at the University of Linz (Austria) from September 6 to 8, 1994. The format of the joint meeting will follow the pattern set by its predecessors. It is intended to review hardware and architecture developments together with languages and software tools for supporting parallel processing and to highlight advances in models, algorithms and applications software on vector and parallel architectures. It is expected that the program will cover: * languages / software tools * automatic parallelization and mapping * hardware / architecture * performance analysis * algorithms * applications * models / semantics * paradigms for concurrency * testing and debugging * portability A special session will be organized on Parallel Symbolic Computation. The proceedings of the CONPAR 94 - VAPP VI conference are intended to be published in the Lecture Notes in Computer Science series by Springer Verlag. This conference is organized by GUP-Linz in cooperation with RISC-Linz, ACPC and IFSR. 
Support by GI-PARS, OCG, OGI, IFIP WG10.3, IEEE, ACM, AFCET, CNRS, C3, BCS-PPSG, SIG and other organizations is being negotiated. Schedule: Submission of complete papers and tuturials Feb 15 1994 Notification of acceptance May 1 1994 Final (camera-ready) version of accepted papers July 1 1994 Paper submittance: Contributors are invited to send five copies of a full paper not exceeding 15 double-spaced pages in English to the program committee chairman at: CONPAR 94 - VAPP VI c/o Prof. B. Buchberger Research Institute for Symbolic Computation (RISC-Linz) Johannes Kepler University, A-4040 Linz, Austria Phone: +43 7236 3231 41, Fax: +43 7236 3231 30 Email: conpar94@risc.uni-linz.ac.at The title page should contain a 100 word abstract and five specific keywords. CONPAR/VAPP also accepts and explicitly encourages submission by electronic mail to conpar94@risc.uni-linz.ac.at. Submitted files must be either * in uuencoded (preferably compressed) DVI format or * in uuencoded (preferably compressed) Postscript format as created on most Unix systems by cat paper.ps | compress | uuencode paper.ps.Z > paper.uue Organising committee: Conference Chairman: Prof. Jens Volkert Honorary Chairman: Prof. Wolfgang Handler Program Chairman: Prof. Bruno Buchberger Members: Siegfrid Grabner, Wolfgang Schreiner Conference Address: University of Linz, Dept. of Computer Graphics and Parallel Processing (GUP-Linz), Altenbergerstr. 69, A-4040 Linz, Austria Tel.: +43-732-2468-887 (885), Fax.: +43-732-2468-10 Email: conpar94@gup.uni-linz.ac.at Program committee: Chairman: Bruno Buchberger (Austria) Makoto Amamiya (Japan), Francoise Andre (France), Marco Annaratone (USA), P.C.P. Bhatt (India), Dario Bini (Italy), Arndt Bode (Germany), Kiril Boyanov, Helmar Burkhart (Switzerland), Cristina Coll (Spain), Michel Cosnard (France), Frank Dehne (USA), Mike Delves (UK), Ed F. Deprettere (The Netherlands), Jack Dongarra (USA), Iain Duff (UK), Klaus Ecker (Germany), John P. ffitch (UK), Rolf Fiebrich (USA), Ian Foster (USA), Geoffrey Fox (USA), Christian Fraboul (France), Wolfgang Gentzsch (Germany), Thomas Gross (USA), Gaetan Hains (Canada), Guenter Haring (Austria), Hiroki Honda (Japan), Hoon Hong (Austria), F. Hossfeld (Germany), Roland N. Ibbett (UK), Chris Jesshope (UK), Harry Jordan (USA), Peter Kacsuk (Hungary), Erich Kaltofen (USA), Hironori Kasahara (Japan), Wolfgang Kleinert (Austria), Wolfgang Kuechlin (Germany), Otto Lange (Germany), Michael A. Langston (USA), Allen D. Malony (USA), Alfonso Miola (Italy), Nikolay Mirenkov (Japan), Yoichi Muraoka (Japan), David A. Padua (USA), Cherri Pancake (USA), Dennis Parkinson (UK), Guy-Rene Perrin (France), Ron Perrot (UK), Bernard Philippe (France), Brigitte Plateau (France), Ramon Puigjaner (Spain), Michael J. Quinn (USA), Gerard L. Reijns (The Netherlands), Karl-Dieter Reinartz (Germany), Dirk Roose (Belgium), Bl. Sendov (Bulgaria), Othmar Steinhauser (Austria), Ondrej Sykora (Slovakia), Denis Trystram (France), Marco Vanneschi (Italy), Paul Vitanyi (The Netherlands), Jens Volkert (Austria), R. Wait (UK), Paul S. Wang (USA), Peter Zinterhof (Austria) Reply Form: We encourage you to reply via e-mail, giving us the information listed below. If you do not have the possibility to use e-mail, please copy the form below and send it to the conference address. CONPAR 94 - VAPP VI Reply Form Name:..................................First Name................Title......... Institution:................................................................... 
Address:....................................................................... Telephone:.....................Fax:...........................E-Mail:.......... Intentions (please check appropriate boxes) o I expect to attend the conference o I wish to present a paper o I wish to present at the exhibition (industrial / academic) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Subject: HPCN Europe 1994, Second Call for Papers From: SONDCRAY@HASARA11.SARA.NL Organization: S.A.R.A. Academic Computing Services Amsterdam 2nd CALL for PAPERS HPCN Europe 1994 The European Conference and Exhibition on High-Performance Computing and Networking Munich, Germany HPCN Europe 1994 is scheduled from April 18-20, 1994 at the Sheraton Hotel Munich, Germany. The conference is supported by an advisory board consisting of over 70 leading experts and representatives from the CEC, the Rubbia Committee, The European Industrial Initiative Ei3, the scientific and research community as well as most of the computer manufacturers. It is accompanied by the largest European HPCN Exhibition. The conference is sponsored by the Commission of the European Communities and Royal Dutch Fairs. The aim of the conference and exhibition is to offer to the HPCN community the European forum for high-performance computing and networking with focus on real industrial and scientific applications. The conference will consist of the following sessions: 1.Engineering Applications 2.Computational Chemistry and Biology 3.Computational Physics 4.Seismic Applications 5.Environmental Applications 6.Embedded Parallel Systems 7.Commercial Applications 8.Computer Science Aspects 9.Novel Applications 10.Storage Technology 11.Networking 12.European Activities 13.Panels on Actual Topics In addition, a vendor session featuring product presentations from vendors participating in the exhibition will take place. Conference Chairpersons are Prof. Wolfgang Gentzsch, FH Regensburg, Prof. Dorte Olsen, University of Copenhagen, Prof. Bob Hertzberger, University of Amsterdam. For the HPCN Europe 1994 conference, key note presentations and invited lecturers by leading scientists and users from all over the world are scheduled. Until now, the following experts have agreed to present an invited talk: Dr.W. Brandstaetter, AVL Graz: The FIRE code on Parallel Computers Mr.H. Forster, CEC Brussels: The European Framework-4 Programme Mr.R. Herken, mental images Berlin: Parallel High-Quality Image Rendering Dr.M. Hillmann, INPRO Berlin: Simulation of Metal Forming Processes Dr.B. Madahar, GEC Marconi, Hirst Research Center: HPC Applications in Image Processing and Embedded Systems Dr.J. Murphy, British Aerospace Bristol: Requirements in the European Aerospace Industry Prof.N. Petkov, University of Groningen: Computational Neuro Science Prof.Y. Robert, Ecole Nationale Superieure de Lyon: Parallelizing Compilers Prof.H. van der Vorst, University of Utrecht: Highly Parallel Hybrid Biconjugate Gradient Methods Prof.H. Zima, University of Vienna: Advanced Implementation Techniques for High Performance FORTRAN Mr. C. Skelton, ICL Manchester: Parallel Information Processing Applications In addition, Technology Demonstrators will demonstrate real applications on real parallel machines. It is illustrated how the experience gained in universities and research labs concerning algorithms, parallel programming and porting of codes to parallel computers, can be transferred directly into industry. 
PAPER SUBMISSION Potential speakers are invited to submit papers to the Conference Secretariat at the address given below. Extended abstracts (2 pages minimum) are acceptable, full papers have our preference. The abstracts/papers will be refereed by the programme committee. Papers are expected to be submitted on scientific, industrial and commercial applications and networking, on tools, languages and algorithms (see the sessions mentioned above). Extended abstracts and full papers should contain: title, authors, full address, telephone, fax number, E-mail address, introduction, description, results, discussion, conclusions The original and three copies must be sent to the Conference Secretariat. The deadline for paper submissions is November 30, 1993. PROPOSE A POSTER Authors preferring an informal, interactive presentation of results may submit a proposal for a poster. The deadline for poster proposals is November 30, 1993. PARTICIPATION IN THE TECHNOLOGY DEMONSTRATORS DISPLAY Central in this Technology Demonstrators Display, are HPCN centers in their role as technology transfer focal points. They are asked, together with the vendors and developers, to design demonstrations related to the conference. A high-speed network giving access to all available machines necessary for demonstrations purposes will be installed. Vendors who have real systems running at the fair as well as HPCN research centers willing to demonstrate and run real applications, are requested to contact the Technology Demonstrators Display Chair Ad Emmen (Tel: +31 20 5923 000). The deadline is November 30, 1993. Registration Fee: Early registration before March 1, 1994: DM 480,- (research), DM 750,- (industry) Late registration after March 1, 1994 DM 580,- (research), DM 850,- (industry) Special registration fee on a per day basis and for students will also be possible. More detailed information concerning registration, accomodation, transportation, etc. may be obtained after January 1, 1994 through the Conference Secretariat. For information about the HPCN Europe 1994 Conference please contact: HPCN Europe 1994, Conference Secretariat Prof. Wolfgang Gentzsch Erzgebirgstrasse 2 D-93073 Neutraubling Tel. +49 9401 9200 0 Fax. +49 9401 9200 92 email: mbox@genias.de -- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Wolfgang Gentzsch GENIAS Software GmbH Erzgebirgstr. 2; D-93073 Neutraubling; Germany Phone: +49 9401 9200-0 | FAX: +920092 | e-mail gent@genias.de - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: nagrwb@vax.oxford.ac.uk (Richard W Brankin) Subject: Job Opportunity-NAG, Oxford, UK Organization: Oxford University VAX 6620 Readers of this group may be interested in a job opportunity at NAG Ltd. Please see misc.jobs.offered, uk.jobs.offered for details. -- ** R.W. Brankin -- Numerical Algorithms Group Ltd. -- nagrwb@vax.ox.ac.uk ** ** suggestions for sig. welcome -- will need include ** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tavanapo@cs.ucf.edu (Wallapak Tavanapong) Subject: nCUBE file access Organization: University of Central Florida Hi I'm doing an experiment about parallel algorithms on nCUBE 2 machine, and I need to read and write data to different files which reside on different nCUBE disks in paralell. 
I used xnc -l to load a program onto a subcube, and I wanted to distribute data files onto different nCUBE disks; however, what I got is that those files were written on the host node disk. I would like to know whether there is a way I can do that. My experiment depends on parallel file accesses, so it's important to me to figure out the way. If you have any suggestions, please let me know. Thank you very much for your help. Sincerely, Wallapak Tavanapong Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kaaza@Informatik.TU-Muenchen.DE (Kaaz) Subject: parallel RPC Organization: TU M"unchen (FRG) Hi everybody, I'm trying to develop and implement a client-server model based on RPC (transport mechanism TCP!) for parallel programming environments (like PVM, P4, NX). With an ordinary RPC the server gets one request, processes it, and sends back a reply to that client. But unfortunately this is not enough for my purposes. Under certain circumstances the server first must get requests from ALL clients involved in a parallel application (e.g. to check the arguments passed by each client for consistency), then process the corresponding procedure, and then send back replies to ALL clients. Because of the need for TCP, I'm not able to use something like "callback broadcasting". Does anybody have a clue or a hint about where to find articles on this? thanks, Andre Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: arajakum@mtu.edu (Rajakumar) Subject: Re:The future of parallel computing Organization: Michigan Technological University I have been following the thread with rapt attention :-) and would like to contribute my two cents worth... A few weeks back, I saw a magazine (I forget which) where a graph of the supercomputer industry was plotted. The graph was flat for the past few years. Now, that got me thinking. The need for parallel computing was fuelled by military research. Now that the superstructure is being dismantled, the market for MPP especially is shrinking. Doesn't this mean that the future of parallel computing lies in distributed processing rather than in MPP? Forgive me my wet-behind-the-ears attitude, but I guess I want reassurance that the field I want to be in for the rest of my research life will not disappear somewhere down the line. -- -->AR<-- Anthony Rajakumar arajakum@mtu.edu Grad Student Michigan Tech Houghton MI 49931. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: DSPWorld@world.std.com (Amnon Aliphas) Subject: Wanted: ICSPAT '94 Technical Review Committee Members Organization: The World Public Access UNIX, Brookline, MA We are seeking committee members to assist in reviewing papers submitted for inclusion in ICSPAT '94, the International Conference on Signal Processing Applications and Technology. The Fifth Annual ICSPAT will convene in Texas in the fall of 1994. The members of the committee should be design, development, or applications engineers with a strong background in DSP technology. Last month, ICSPAT'93 in Santa Clara, California, included close to 400 papers in a range of application areas including: audio and speech, automotive, biomedical, communications and telephony, consumer applications, digital filters, DSP algorithms, image analysis and coding, image processing, instrumentation and testing, multimedia, neural networks, parallel processing, etc. This is a volunteer position.
Members of the Technical Review Committee will be invited to participate in the conference and will receive consideration to act as ICSPAT Session Chairpersons. (There's some work involved, but there is great potential for glory!). We plan to select the members of the Technical Review Committee by 26 November 1993. If you are interested in being considered for the Review Committee, please contact us by mail, e-mail, or fax, as soon as possible. Attn: review committee Selection phone: 617-964-3817 DSP Associates fax: 617-969-6689 18 Peregrine Road email:DSPWorld@world.std.com Newton Centre, MA 02159 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.lang.fortran,comp.sys.super From: forge@netcom.com (FORGE Customer Support) Subject: Info on APR's FORGE Products Available by Anon FTP Followup-To: comp.parallel Keywords: APR FORGE Parallelizers FTP Organization: Applied Parallel Research, Inc. Date: Fri, 19 Nov 1993 21:24:55 GMT Apparently-To: comp-parallel@uunet.uu.net Due to the great interest in our Fortran parallelizing and analysis tools at SuperComputing 93 and from email, we have made information regarding our products available by anonymous FTP. Very soon, we will be distributing evaluation copies of our products by this same anonymous FTP scheme. After downloading and installing the software you would then have to call us to obtain a license key to enable the software for a fixed 15 or 30 day trial. But more about that later. To access our product information files, do the following: 1) FTP to netcom.com logging in as anonymous and give your own email address as a password. 2) change directory to pub/forge/ProductInfo 3) use get file_name to download to your machine the ascii text files in this directory. 4) close FTP by typing quit at the prompt. Here is a sample session: ------------------------------------------------------------------- amber<12>% ftp netcom.com Connected to netcom.com. 220 netcom FTP server (SunOS 4.1) ready. Name (netcom.com:forge): anonymous 331 Guest login ok, send ident as password. Password:xxxxxxxxxxxx 230 Guest login ok, access restrictions apply. ftp> cd pub/forge/ProductInfo 250 CWD command successful. ftp> dir 200 PORT command successful. 150 ASCII data connection for /bin/ls (192.100.81.107,3045) (0 bytes). total 72 -rwxr-xr-x 1052 Nov 19 19:41 README -rwxr-xr-x 10233 Nov 19 19:31 dpf.man.txt -rwxr-xr-x 8125 Nov 19 19:31 dpf_datasheet.txt -rwxr-xr-x 8813 Nov 19 19:31 forge90_datasheet.txt -rwxr-xr-x 7258 Nov 19 19:31 forgex_datasheet.txt -rwxr-xr-x 7273 Nov 19 19:31 magic_datasheet.txt -rwxr-xr-x 7074 Nov 19 19:31 news.txt -rw------- 1923 Nov 19 20:47 pricing.txt -rwxr-xr-x 11160 Nov 19 19:31 xhpf.man.txt -rwxr-xr-x 6580 Nov 19 19:31 xhpf_datasheet.txt 226 ASCII Transfer complete. 713 bytes received in 0.3 seconds (2.3 Kbytes/s) ftp> get README 200 PORT command successful. 150 ASCII data connection for README (192.100.81.107,3114) (1052 bytes). 226 ASCII Transfer complete. local: README remote: README 1089 bytes received in 0.023 seconds (46 Kbytes/s) ftp> quit ------------------------------------------------------------------- The README file describes the files in this directory. It was a pleasure meeting many of you at SC 93 in Portland this week. Stay tuned to comp.{parallel,lang.fortran,sys.super} for more information. -- /// Applied /// FORGE 90 Customer Support Group /// Parallel /// 550 Main St., Placerville, CA 95667 /// Research, Inc. 
(916) 621-1600 621-0593 fax forge@netcom.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kelly@pdx154.intel.com (J.Kelly Flanagan) Subject: Available MP Machines Sender: news@ichips.intel.com (News Account) Organization: Intel Corp., Hillsboro, Oregon Date: Sat, 20 Nov 1993 02:12:41 GMT Apparently-To: uunet.uu.net!comp-parallel I am interested in a survey of existing commercial multiprocessor machines. If this has been covered here before, please let me know where the FAQ is and I'll go away. If it hasn't been, please let me know what companies and products are available and I will summarize to the net if there is interest. I am interested in shared memory, distributed memory, you name it, I'll listen :-) Kelly -- ========================================================================== J. Kelly Flanagan ON SABBATICAL @ INTEL J. Kelly Flanagan Computer Science Dept. 9/93 through 4/94 Intel Corp. Brigham Young University | JF1-91 3372 TMCB | 5200 NE Elam Young Parkway PO BOX 26576 | Hillsboro, OR 97124 Provo, UT 84602-6576 | | kelly@cs.byu.edu | kelly@ichips.intel.com work: (801)-378-6474 | work: 503-696-4117 fax: (801)-378-7775 | home: 503-693-1130 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.arch,comp.parallel From: menes@statistik.tu-muenchen.de (Rainer Menes) Subject: Re: INMOS Information available on T9000 Followup-To: comp.sys.transputer,comp.arch,comp.parallel Sender: news@sunserver.lrz-muenchen.de (Mr. News) Organization: IAMS References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> <1993Nov17.134249.25077@hubcap.clemson.edu> <1993Nov19.133529.9033@hubcap.clemson.edu> Date: Sat, 20 Nov 1993 18:03:17 GMT Apparently-To: comp-parallel@news.Germany.EU.net In article <1993Nov19.133529.9033@hubcap.clemson.edu>, co1dll@ss5.sheffield.ac.uk (D Lamptey) wrote: . stuff deleted . > It is not sad, and not hype. It is true. Transputers have traditionally > been very powerful and cost effective in embedded type systems, where: That's right. I like to use transputers for realtime control. I don't know any other system which gives me such a flexible environment. > The systems are often highly parallel (transputer cant be beaten) > The transputer concept is cryingly close to the way parallel > real-time systems are specified and designed. Transputers are very good not only for realtime applications. For parallel programming of mathematical programs and the like, we get 70 - 90% speedup with our programs most of the time. I don't know what other parallel architectures offer, but yesterday I made a parallel version of a big program in just 8 hours with C and VCR. The speedup is 85% compared to the sequential version. We will now be able to use up to 129 transputers. Very fast, and I doubt that an Alpha box is faster. I have to prove this next week. > The chip count needs to be low (How does one sound?) > > The power consumption also needs to be low. (3 watts - 5 watts) > > On a price to performance ratio for highly scalable systems, the transputer > is a strong contender. The alphas, etc have different strong points, i.e > brutal number-crunching,( and egg-frying). > > In my usual objectiveness, I shall have to say that INMOS have'nt yet been > able to deliver on their promises and a lot of people have made decisions > to jump ship, because there are other devices offering performance on par > with the t9 specs (or better). But for embedded type systems, we are yet to > see anything els overtake the t9. Hmm, we do a lot of realtime controlling with transputers, and the only transputers which meet our needs are the T2 series. The 32-bit T4 and T8 are too expensive for most realtime needs (I mean producing several hundred systems a year, not special machines like the "Transputers on the Road" system mentioned below). Also, performance doesn't differ too much as long as you don't have to do floating point. A 25MHz T2 is as fast as a 25MHz T8 for I/O things, or very close, but the T8 costs four times the price. I doubt INMOS will be able to sell the T9 at a price where the chip is very interesting for realtime controlling in mass production. A chip price of about $20 - $50 is what you could pay, not $300 - $600. Look at the T8: the prices haven't changed very much in the last few years. This is something I don't understand. I think the T9 is a dead end for INMOS. They would have done better to produce new chips with better performance and to work stepwise towards a chip which meets the T9 idea. "Rome wasn't built in a day", and neither is the T9, but 2 - 3 years of waiting for a chip after its original announcement date is unparalleled. What about a T900 which is pin-compatible with the T800, with twice the execution speed and some improvements from the T9000 project? This would make money, keep most of us happy, and help INMOS control their production process and learn step by step. > Derryck, > > p.s. For an idea of what can be done with transputers, have a look round > for the paper "Transputers on the Road" arising out the the WTC congress > in Aachen by the Daimler Benz group. It is about a transputer > system (around 19 t800 transputers) doing real time road-scene > analysis, pattern and road-sign recognition, vehicle tracking and > a whole lot more Rainer, P.S.: I still enjoy programming transputers and will continue to use them for our projects, but the marketing at INMOS is stupid. I am not talking about the designers and programmers; only the marketing is bad. -------------------------------------------------------------------------- Rainer Menes menes@statistik.tu-muenchen.de Transputer Development Group IAMS - Technical University of Munich Tel: 49 89 2105 8226 Fax: 49 89 2105 8228 ------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rms@well.sf.ca.us (richard marlon stein) Subject: Re: The future of parallel computing Message-ID: Sender: news@well.sf.ca.us Nntp-Posting-Host: well.sf.ca.us Organization: The Whole Earth 'Lectronic Link, Sausalito, CA References: <1993Nov19.210140.22782@hubcap.clemson.edu> Date: Sat, 20 Nov 1993 16:50:07 GMT In article <1993Nov19.210140.22782@hubcap.clemson.edu>, Rajakumar wrote: > I have been following the thread with rapt attention :-) and would like >to contibute my two cents worth... > >military research. Now that the superstructure is being dismantled, the market >for MPP especially is shrinking. Doesn't this mean that the future of parallel >computing lies in distrubuted processing rather than in MPP? >Anthony Rajakumar arajakum@mtu.edu >Grad Student >Michigan Tech >Houghton MI49931. > A wholesomely perceptive observation. With the military budget declining, HPC vendors must slug it out in the marketplace with PCs and other systems.
Clearly, businesses want machines that are cost-effective and enhance productivity, which is the motivation behind most uses of computers in the first place. The HPC vendors should try to leverage their scalable technology down to the desktop. Anyone for a personal parallel computer with 8 or 16 cpus? My guess is that KSR All-Cache (tm) should do pretty well here. But the biggest problem with the HPC vendors having been driven by military computing requirements in the past is the reach for teraflops, not system balance. Hence, you've got machines that can compute a ton but really suck at I/O, which is two thirds of any machine's purpose. IMHO, HPC manufacturers should foot the bill for I/O enhancement; after all, they've had plenty of time and funding from the feds to build the machines correctly in the first place. -- Richard Marlon Stein, Internet: rms@well.sf.ca.us To those who know what is not known; The Chronicles of Microwave Jim! Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: Re: The future of parallel computing Date: 20 Nov 1993 22:53:41 GMT Organization: Professional Student, University of Maryland, College Park References: <1993Nov19.210140.22782@hubcap.clemson.edu> Nntp-Posting-Host: mountaindew.eng.umd.edu In article <1993Nov19.210140.22782@hubcap.clemson.edu> arajakum@mtu.edu (Rajakumar) writes: >Now, that got me thinking. The need for parallel computing was fuelled by >military research. Now that the superstructure is being dismantled, the market >for MPP especially is shrinking. Doesn't this mean that the future of parallel >computing lies in distrubuted processing rather than in MPP? Hi, I would say that there is a broad range of problems which fuel the MPP market and which are not defense applications. NSF is responsible for the major funding of academic purchases of MPP machines. NSF has also released its list of "Grand Challenge" problems which will need MPP technology. Some of these applications are global modeling, genetic matching, management of extremely large databases, real-time image processing, etc. Because NSF is funding university research projects and the purchase of certain MPP machines (such as the MasPar and CM-5), I think that we might not be heading in the absolute best direction for this field. (And that is why I wrote the initial article called "The Future of Parallel Computing.") To follow up on this thread, I claim that the feedback from scientists and researchers to the MPP industry is not being heard, mainly because the market is being artificially supported by NSF. That is, we need to unify our modeling of parallel algorithms, including our view of the machine and the functionality of the compiler, instead of getting faster hardware thrown at us. So, to answer your question, "yes" - I believe that even with the cutback of military projects, there will be a need for experts in the field of parallel computing using MPP (as well as distributed processing). Thanks, david David A. Bader Electrical Engineering Department University of Maryland College Park, MD 20742 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: manu@CS.UCLA.EDU (Maneesh Dhagat) Subject: Supercomputer Sale to China Nntp-Posting-Host: oahu.cs.ucla.edu Organization: UCLA, Computer Science Department Date: Sun, 21 Nov 93 03:46:41 GMT Apparently-To: comp-parallel@uunet.uu.net Hi, recently, on the news, there's been a mention of the US selling an $8 million supercomputer to China. Does anyone know which machine this is? Please post, or send email to manu@cs.ucla.edu Thanks. -- --------------------------------------------------- Maneesh Dhagat (manu@cs.ucla.edu) University of California, Los Angeles, CA 90024 --------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Federico Silveira Subject: ***MUSTEK PrintScan 105*** Organization: Cornell University Sender: fs16@cornell.edu (Verified) Nntp-Posting-Host: operators3.cit.cornell.edu X-Useragent: Nuntius v1.1.1d24 X-Xxdate: Sat, 20 Nov 93 04:22:11 GMT I have a MUSTEK PrintScan 105 handheld scanner for sale. I purchased it in August but I haven't used it much. This lack of use is why I am selling it. It is in mint condition and comes with everything it came with, plus a Scan Align Pad which I purchased to accompany it. It has spent most of its time in the box and not in use. It comes with Perceive OCR, Rainbow Paintshop, and MUSTEK's own scan utilities for Windows. I am asking $135.00 or best offer. I have more information if wanted. Federico Silveira fs16@cornell.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tskrishn@rs6000.cmp.ilstu.edu (T.S.V. Krishnan) Subject: PVM_LIKE_Software_for_PCs To: comp-parallel@vixen.cso.uiuc.edu Date: Sun, 21 Nov 1993 16:03:55 -0600 (CST) X-Mailer: ELM [version 2.4 PL23] Content-Type: text Content-Length: 930 I am working on my Master's thesis "Distributed Load Balancing using Artificial Intelligence". I need some type of simulation or a program, like PVM or similar, to link various workstations in a Local Area Network. I plan to use one of the workstations (a PC) as the master and link 3 or more workstations (PCs) in such a way that the master has full control over the slave workstations. The idea is to run a task allocation program on top of this, such that the master can call different workstations and allocate tasks to them, and the slaves in turn return the result of each process to the master. If anyone has some information on how to do this, or if there exists any simulator or software which would allow me to do this, please let me know. Any help in this regard is highly appreciated. Please send your suggestions to tskrishn@rs6000.cmp.ilstu.edu.
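To make the idea concrete, the master/slave structure I have in mind would look roughly like the following PVM-style sketch (the slave program name "slave_task" and the message tags are invented, and a PVM-like package for PCs would need equivalent calls):

/* Hypothetical master: spawns the slaves, hands each one a task, and
 * collects the results. The slave side would unpack its task, compute,
 * and send an integer result back with TAG_DONE. */
#include <stdio.h>
#include "pvm3.h"

#define NSLAVES  3
#define TAG_WORK 1
#define TAG_DONE 2

int main(void)
{
    int tids[NSLAVES];
    int i, task, result;

    pvm_mytid();                                  /* enroll in the virtual machine */
    pvm_spawn("slave_task", (char **)0, PvmTaskDefault,
              "", NSLAVES, tids);                 /* start the slave processes */

    for (i = 0; i < NSLAVES; i++) {               /* allocate one task to each slave */
        task = i;
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&task, 1, 1);
        pvm_send(tids[i], TAG_WORK);
    }
    for (i = 0; i < NSLAVES; i++) {               /* collect the results in any order */
        pvm_recv(-1, TAG_DONE);
        pvm_upkint(&result, 1, 1);
        printf("got result %d\n", result);
    }
    pvm_exit();
    return 0;
}

The load balancing policy would then decide which task goes to which slave before each pvm_send.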
Thanks Krish Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Hugo.Embrechts@cs.kuleuven.ac.be (Hugo Embrechts) Subject: share a hotel room at 5th IEEE SPDP, Dallas TX, 1-4 dec Sender: news@cs.kuleuven.ac.be Nntp-Posting-Host: bach.cs.kuleuven.ac.be Organization: Applied Math Division, Computer Science Dept., Katholieke Universiteit Leuven, Belgium Date: Mon, 22 Nov 1993 09:35:37 GMT Apparently-To: comp-parallel@Belgium.EU.net I'm looking for a person to share a hotel room at the 5th IEEE conference on Parallel and Distributed Processing, held at Dallas, TX, 1-4 dec. Who is interested can reply me at hugo@cs.kuleuven.ac.be. Thanks, Hugo Embrechts, Dept. of Computer Science, Leuven, Belgium Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.arch,comp.parallel From: djc@cam-orl.co.uk (David J Clarke) Subject: Re: INMOS Information available on T9000 Message-ID: <1993Nov23.154301.10318@infodev.cam.ac.uk> Sender: news@infodev.cam.ac.uk (USENET news) Nntp-Posting-Host: pepper.cam-orl.co.uk Organization: Olivetti Research Ltd References: <1993Nov11.213059.26255@hubcap.clemson.edu> <1993Nov16.151841.3483@hubcap.clemson.edu> <1993Nov17.134249.25077@hubcap.clemson.edu> <1993Nov19.133529.9033@hubcap.clemson.edu> <1993Nov22.135659.3969@hubcap.clemson.edu> Date: Tue, 23 Nov 1993 15:43:01 GMT Here is another Engineer who swears by (and sometines at) Transputers for Embedded Control. The realtime reponse is unchallenged simply because the whole machine is designed with that in mind. Forget software timing loops. Forget polled i/o. This is for real. One line of Occam gets you in or out without hanging the processor waiting for the world to catch up. Speak out, Harware types. Don't be put down by people who think that because there is a harder way, it must be better! Dave Clarke Olivetti Research Limited Cambridge, England Disclaimer: The views expressed are my own, not necessarily those of my employer or all of my colleages. -DC Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: coker@cherrypit.princeton.edu (David A. Coker) Subject: Re: Supercomputer Sale to China Originator: news@nimaster Sender: news@princeton.edu (USENET News System) Nntp-Posting-Host: cherrypit.princeton.edu Organization: Princeton University References: <1993Nov22.135728.4241@hubcap.clemson.edu> Date: Mon, 22 Nov 1993 14:11:48 GMT Apparently-To: comp-parallel%rutgers.edu@phoenix.Princeton.EDU I believe the computer being sold to China is a Cray. David -- ___________________________________________________________ |o| |o| | | David A. Coker | | Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: announce@PARK.BU.EDU Subject: Faculty position in Cognitive and Neural Systems at Boston University NEW SENIOR FACULTY IN COGNITIVE AND NEURAL SYSTEMS AT BOSTON UNIVERSITY Boston University seeks an associate or full professor starting in Fall 1994 for its graduate Department of Cognitive and Neural Systems. This Department offers an integrated curriculum of psychological, neurobiological, and computational concepts, models, and methods in the fields of neural networks, computational neuroscience, and connectionist cognitive science in which Boston University is a leader. 
Candidates should have an international research reputation, preferably including extensive analytic or computational research experience in modeling a broad range of nonlinear neural networks, especially in one or more of the areas: vision and image processing, visual cognition, spatial orientation, adaptive pattern recognition, and cognitive information processing. Send a complete curriculum vitae and three letters of recommendation to Search Committee, Department of Cognitive and Neural Systems, Room 240, 111 Cummington Street, Boston University, Boston, MA 02215. Boston University is an Equal Opportunity/Affirmative Action employer. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: lukowicz@ira.uka.de (Paul Lukowicz ) Subject: Re: KSR locks and barriers Date: 22 Nov 1993 14:36:53 GMT Organization: University of Karlsruhe, FRG References: <1993Nov19.133446.8779@hubcap.clemson.edu> Nntp-Posting-Host: i41s3.ira.uka.de Sender: newsadm@ira.uka.de > We were a bit amazed by the results: > >(a) In thee case of less than 32 threads, barrier synchronization was > always faster than lock synchronization (about 15-20%). >(b) When we had more threads than processors, the performance of > barrier sync. decreased dramatically, while the speedup of the > lock version in relation to the sequential version still > increased. > >Is this (b) a known effect, and if so, do you have an explanation for it? -- We have observed the same effect while implementing the KSR-1 backend of our Modula-2* compiler. I think that this is due to the fact that barrier synchronisation uses buisy polling with high priority. Once one of the threads on a processor enters a barrier it uses up a lot of computing time for the polling, preventing the other threads from doing anything usefull. Paul +--------------------------------------------------------+ | Paul Lukowicz (email: lukowicz@ira.uka.de) | | Institut fuer Programmstrukturen und Datenorganisation | | Fakultaet fuer Informatik, Universitaet Karlsruhe | | Postfach 6980, W-7500 Karlsruhe 1, Germany | | (Voice: ++49/(0)721/6084386, FAX: ++49/(0)721/694092) | +--------------------------------------------------------+ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 22 Nov 93 09:58:31 -0500 From: announce@PARK.BU.EDU Subject: Graduate study in Cognitive and Neural Systems at Boston University *********************************************** * * * DEPARTMENT OF * * COGNITIVE AND NEURAL SYSTEMS (CNS) * * AT BOSTON UNIVERSITY * * * *********************************************** Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies The Boston University Department of Cognitive and Neural Systems offers comprehensive advanced training in the neural and computational principles, mechanisms, and architectures that underly human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1994 admission and financial aid are now being accepted for both the MA and PhD degree programs. 
To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: Department of Cognitive & Neural Systems Boston University 111 Cummington Street, Room 240 Boston, MA 02215 617/353-9481 (phone) 617/353-7755 (fax) or send via email your full name and mailing address to: cns@cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self- organization; associative learning and long-term memory; computational neuroscience; nerve cell biophysics; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; active vision; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique features. It has developed a curriculum that consists of twelve interdisciplinary graduate courses each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Nine additional advanced courses, including research seminars, are also offered. Each course is typically taught once a week in the evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as Biology, Computer Science, Engineering, Mathematics, and Psychology, in addition to courses in the CNS curriculum. The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems (CAS). Students interested in neural network hardware work with researchers in CNS, the College of Engineering, and at MIT Lincoln Laboratory. 
Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the Department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. 1993-94 CAS MEMBERS and CNS FACULTY: Jacob Beck Daniel H. Bullock Gail A. Carpenter Chan-Sup Chung Michael A. Cohen H. Steven Colburn Paolo Gaudiano Stephen Grossberg Frank H. Guenther Thomas G. Kincaid Nancy Kopell Ennio Mingolla Heiko Neumann Alan Peters Adam Reeves Eric L. Schwartz Allen Waxman Jeremy Wolfe Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: patel@cis.ohio-state.edu (kant c patel) Newsgroups: comp.parallel,comp.sys.super Subject: Router Implementations Date: 22 Nov 1993 11:25:05 -0500 Organization: The Ohio State University Dept. of Computer and Info. Science Hi, I am studying wormhole router implementations, and am looking for some information on the hardware implementation of the router in the Intel Paragon, i.e., things like the types of decisions that are made at each router as the messages proceed through the network, and how these are actually implemented in hardware, header processing and flit transmission times, some hardware details of implementation of the switch, etc. I don't know if this kind of information would actually be available for unlimited release, but I would really appreciate any pointers to whatever information has been released by the company. Thanks, Kant (patel@cis.ohio-state.edu) ______________________________________________________________________________ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: gabi@gup.uni-linz.ac.at (Gabriele Kotsis) Subject: cf participation Date: 22 Nov 1993 17:48:38 GMT Organization: ACE, Uni Wien Reply-To: gabi@gup.uni-linz.ac.at Nntp-Posting-Host: heraklit.ani.univie.ac.at X-Charset: ASCII X-Char-Esc: 29 CALL FOR PARTICIPATION Minisymposium on Performance Prediction of Parallel Programs OCG Sitzungssaal, Wollzeile 1-3, 1010 Vienna, Austria December 9th, 1993 8.45- 9.00 Welcome 9.00-10.00 Gianfranco Balbo, Universita di Torino Performance Evaluation and Concurrent Programming 10.00-11.00 Alan Wagner, University of British Columbia Performance Issues in the Design of Task-Oriented Templates 11.00-11.30 coffee break 11.30-12.30 Arjan J.C. van Gemund, Delft University of Technology PAMELA: A Performance Modeling Methodology 12.30-13.00 R. Kolmhofer, Universit"at Linz Measurement of the nCUBE Communication Behavior 13.00-14.30 lunch 14.30-15.30 Rosemary Candlin, University of Edinburgh Estimating Performance from the Macroscopic Properties of Parallel Programs 15.30-16.30 Ulrich Herzog, Universit"at Erlangen-N"urnberg Constructive Modelling with Stochastic Process Algebras 16.30-17.00 coffee break 17.00-18.00 Umakishore Ramachandran, Gorgia Institute of Technology In Search of a Crystal-ball: Application meets Architecture 18.00-18.30 H. 
Gietl, nCUBE Generic Algorithms 18.30-19.00 G"unter Haring, Universit"at Wien; Jens Volkert, Universit"at Linz Performance Prediction Within CAPSE Organizers: Prof. G. Haring (haring@ani.univie.ac.at) Institut f"ur Angewandte Informatik und Informationssysteme Abteilung Advanced Computer Engineering Universit"at Wien Lenaugasse 2/8, 1080 Wien, "Osterreich (Vienna, Austria) Tel. +43 1 408 63 66 10 and Prof. J. Volkert (jv@gup.uni-linz.ac.at Institut f"ur Graphische und Parallele Datenverarbeitung Universit"at Linz Altenbergerstra"se 69, 4040 Linz, "Osterreich Tel. +43 732 2468 888 How to register: Please, fill in and return the enclosed registration form to Gabriele Kotsis gabi@ani.univie.ac.at Registration fee is AS 300,- (payment will be made on site) and includes refreshments during the coffee breaks and a copy of the abstracts of the talks. ******************************************************************* Last Name: _____________________________ First Name: ________________________ Organization: __________________________________________ Address: _____________________________________________ _____________________________________________ City, State,Zip/Country:__________________________________ Phone: ___________________, Fax: ____________________ E-mail:_______________________________ Registration Fee: AS 300,- ******************************************************************* Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: desj@ccr-p.ida.org (David desJardins) Newsgroups: comp.parallel,comp.lang.fortran Subject: Re: REALIGN & REDISTRIBUTE! Date: 22 Nov 1993 14:38:37 -0500 Organization: IDA Center for Communications Research, Princeton References: <1993Nov17.134320.25278@hubcap.clemson.edu> Chih-Zong Lin writes: > In the definition of High Performance Fortran, > REALIGN and REDISTRIBUTE are provided to change data allocation dynamically. > > Is there any real applications that is suitable to use these directives? Sure. An easy example is a matrix multiply routine. This might be highly optimized but only if the matrices are in a particular relative layout. Since matrix multiplication is probably order N^3 (unless n is very large) and realignment is only N^2, it makes sense to realign the matrices for the execution of the subroutine even if this provides only a relatively small increase in the speed of execution. In general, any routine which does a large amount of work relative to the size of its arguments, is probably going to benefit from redistributing them when necessary. David desJardins Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: DKGCampb@cen.ex.ac.uk Subject: Re: The future of parallel computing Organization: University of Exeter, UK. References: <1993Nov19.210140.22782@hubcap.clemson.edu> Date: Mon, 22 Nov 1993 17:35:04 GMT Apparently-To: hypercube@hubcap.clemson.edu Sender: D.K.G.Campbell@exeter.ac.uk In article <1993Nov19.210140.22782@hubcap.clemson.edu> arajakum@mtu.edu (Rajakumar) writes: > A few weeks back, I saw a magazine ( I forget which ) where a graph of >the supercomputer industry was plotted. The graph was flat for the past few years. >Now, that got me thinking. The need for parallel computing was fuelled by >military research. Now that the superstructure is being dismantled, the market >for MPP especially is shrinking. Doesn't this mean that the future of parallel >computing lies in distrubuted processing rather than in MPP? 
Well, quoting from "Computing" on 28 October 1993, p8: "Analysts predict large-scale growth in the massively parallel market between now and 2000. In particular, banks and other large, IT-dependent organisations are looking to massively parallel systems to administer the huge databases they rely on to store customer details, stock lists and other critical information." So, some people do think there to be a future in MPP. -- Duncan Campbell Acknowledgement: I'd like to thank me, Department of Computer Science without whom none of this University of Exeter, would have been possible. Exeter EX4 4PT Tel: +44 392 264063 Telex: 42894 EXUNIV G Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Dik.Winter@cwi.nl (Dik T. Winter) Subject: Re: Supercomputer Sale to China Sender: news@cwi.nl (The Daily Dross) Nntp-Posting-Host: boring.cwi.nl Organization: CWI, Amsterdam References: <1993Nov22.135728.4241@hubcap.clemson.edu> Date: Mon, 22 Nov 1993 22:16:17 GMT Apparently-To: comp-parallel@NL.net In article <1993Nov22.135728.4241@hubcap.clemson.edu> manu@CS.UCLA.EDU (Maneesh Dhagat) writes: > recently, on the news, there's been a mention of > the US selling a $8 mill. supercomputer to China. > Does anyone know which machine this is? > According to my paper it is not the US that sells but Cray Research Inc. -- dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland home: bovenover 215, 1025 jn amsterdam, nederland; e-mail: dik@cwi.nl Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: raman@umiacs.umd.edu (Dr. Rajeev Raman) Subject: Re: The future of parallel computing Date: 22 Nov 1993 17:35:18 -0500 Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742 References: <1993Nov19.210140.22782@hubcap.clemson.edu> <1993Nov22.135716.4149@hubcap.clemson.edu> In article <1993Nov22.135716.4149@hubcap.clemson.edu> dbader@eng.umd.edu (David Bader) writes: >In article <1993Nov19.210140.22782@hubcap.clemson.edu> arajakum@mtu.edu (Rajakumar) writes: >>Now, that got me thinking. The need for parallel computing was fuelled by >>military research. Now that the superstructure is being dismantled, the market >>for MPP especially is shrinking. Doesn't this mean that the future of parallel >>computing lies in distrubuted processing rather than in MPP? > >Hi, > I would say that there is a broad range of problems which fuel the >MPP market which are not defense applications. If I may add a (somewhat naive) observation to David's commentary --- while it's often stated that uniprocessor power has been increasing by 50-100% a year for the last few years, the equally remarkable fact to me is how the need for such fast "general-purpose" computing has grown as well. The SPARC 2's which were such hot stuff two years ago are now pretty passe. I think if cheap processing power is available for general purpose computing it will be used. How? Engineers will do ever more accurate computations, vision and robotics people will process larger images at finer resolution, financial analysts will make ever more elaborate market models and so on. One doesn't have to look too far outside the traditional computing community to see the need (i.e. we don't necessarily need Grand Challenges.) Where will this extra speed come from? The uniprocessor speed increase cannot keep going much longer. 
(a) physical limitations will soon kick in; (b) I don't see another innovation like RISC around the corner (even RISC really added only a factor of say two or four to computing speeds, most of the improvement since then has come from improvements in the technology of chip manufacturing). (c) profit margins on large powerful chips will keep declining -- already the Pentium is nowhere near as profitable as the 486. I expect that uniprocessors will cease to command huge price premiums in 4-5 years, and that manufacturers will make money by selling them in huge quantities at low cost. (R & D to make faster chips will soon be at the point of diminishing returns.) Hopefully this will put MPP back into contention and the hardware manufacturers will be more interested in manufacturing MPP-friendly chips (with hardware support for context switching etc). rr Rajeev Raman raman@umiacs.umd.edu UMIACS, A.V. Williams Bldg, University of Maryland, College Park, MD 20742 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: khb@chiba.Eng.Sun.COM (Keith Bierman-khb@chiba.eng.sun.com::SunPro) Subject: Re: Supercomputer Sale to China Date: 22 Nov 93 17:08:32 Organization: SunPro Message-ID: References: <1993Nov22.135728.4241@hubcap.clemson.edu> NNTP-Posting-Host: chiba In-reply-to: manu@CS.UCLA.EDU's message of Sun, 21 Nov 93 03:46:41 GMT In article <1993Nov22.135728.4241@hubcap.clemson.edu> manu@CS.UCLA.EDU (Maneesh Dhagat) writes: recently, on the news, there's been a mention of the US selling a $8 mill. supercomputer to China. Does anyone know which machine this is? The local newspaper claimed a machine from CRI. However, it was unclear which of the several models it could have been. -- ---------------------------------------------------------------- Keith H. Bierman keith.bierman@Sun.COM| khb@chiba.Eng.Sun.COM SunPro 2550 Garcia MTV 12-40 | (415 336 2648) fax 964 0946 Mountain View, CA 94043 Copyright 1993 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: davidhan@cs.unsw.oz.au (David Han) Date: Tue, 23 Nov 93 14:36:25 +1100 Subject: ICCI'94 INFORMATION Dear Sir, Can you send me information of ICCI'94 ? Thank you. -- _--_|\ David Han (davidhan@cs.unsw.oz.au) / \ +61-2-663-4576 (fax) \_.--._* MAIL: AI Lab, Com. Sci., Uni. NSW, v PO BOX 1, Kensington, N.S.W. 2033 SYDNEY, AUSTRALIA Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: djpatel@chaph.usc.edu (Dhiren Jeram Patel) Subject: Does CM-5 do any prefetching? Date: 22 Nov 1993 23:36:30 -0800 Organization: University of Southern California, Los Angeles, CA Sender: djpatel@chaph.usc.edu Keywords: CM-5 Thinking Machines Subject line says it all. I'd appreciate any info on any prefetching mechanisms in the CM-5. Thanks a bunch... Dhiren Patel [^_^] Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: nsrcjl@leonis.nus.sg (Jerry Lim Jen Ngee) Subject: Call for Papers - Journal of NSRC Date: 23 Nov 1993 08:11:02 GMT Organization: National University of Singapore Nntp-Posting-Host: leonis.nus.sg X-Newsreader: TIN [version 1.2 PL0] The second issue of High Performance Computing, the Journal of NSRC, will be published in July 1994. Researchers and users in the field of high performance computing are invited to send in their papers for consideration. Submission for the second issue should not be later than end February 1994. 
Scope of Journal The journal will focus on the applications aspects of high performance computing, i.e., the use of supercomputers, massively parallel computers, and clusters of heterogeneous computers to solve scientific, engineering, and business problems. For further information, please contact : The Editor Attn: Mrs Evelyn Lau National Supercomputing Research Centre National University of Singapore 81 Science Park Drive #04-03 The Chadwick Singapore Science Park Singapore 0511 Tel : (65) 77 89 080 Fax : (65) 77 80 522 Email : songjj@nsrc.nus.sg Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mehlhaus@ira.uka.de (Ulrich Mehlhaus) Subject: supercomputer applications Date: 23 Nov 1993 11:53:31 GMT Organization: University of Karlsruhe, FRG Nntp-Posting-Host: i60s33.ira.uka.de Sender: newsadm@ira.uka.de Hi, I'm looking for typical applications requiring the use of multi-processor computers. I've just scanned the timeline of developments in parallel computing (by Greg Wilson) and found some machines offering thousands of processors. I can imagine some applications (like simulations in physics or dynamic flow analysis), but are there typical applications that really require such a number of CPUs? I'm especially interested in publications describing the use of this type of computer to fulfill a very complex task. Thanks in advance Uli ------------------------------------------------------------------------------ Ulrich Mehlhaus Institute for Real-Time Computer Systems and Robotics University of Karlsruhe Kaiserstrasse 12 P.O.Box 6980 D-76128 Karlsruhe email: mehlhaus@ira.uka.de tel: xx/49/721/6084243 fax : xx/49/721/606740 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Damal Arvind Subject: Final CFP: ACM/IEEE/SCS 8th Workshop on Par. and Dist. Simulation Sender: UseNet News Admin Organization: Department of Computer Science, University of Edinburgh Date: Tue, 23 Nov 1993 12:57:21 GMT Apparently-To: comp-parallel@uknet.ac.uk ============================================= Final Call For Papers ACM/IEEE/SCS 8th Workshop on Parallel and Distributed Simulation University of Edinburgh, Scotland, U.K. July 6-8, 1994 ============================================= Sponsors: ACM Special Interest Group on Simulation (SIGSIM), IEEE Computer Society Technical Committee on Simulation (IEEE-TCSIM), and Society for Computer Simulation (SCS) Topics: PADS provides a forum for presenting recent results in the simulation of large and complex systems by exploiting concurrency. The scope of the conference includes, but is not limited to: * Algorithms and methods for concurrent simulation (e.g. optimistic, conservative, discrete, continuous, event-driven, oblivious) * Programming paradigms for concurrent simulation (e.g. object-oriented, logic, functional) * Models of concurrent simulation (e.g. stochastic, process algebraic, temporal logic) * Performance evaluation (both theoretical and experimental) of concurrent simulation systems * Special purpose concurrent simulation (e.g. multiprocessor architectures, distributed systems, telecommunication networks, VLSI circuits, cache simulations) * Relationship of concurrent simulation and underlying architecture (e.g.
SIMD and MIMD machines, geographically distributed computers, tightly-coupled multiprocessors) Schedule: Deadline for Paper submission : December 1, 1993 Notification of acceptance : March 1, 1994 Camera ready copy due by : April 15, 1994. Invited Speaker : LEONARD KLEINROCK (Los Angeles, USA) General Chair : Rajive Bagrodia (Los Angeles, USA) Local Arrangements: Monika Lekuse (Edinburgh, U.K.) Program Co-chairs D. K. Arvind Jason Yi-Bing Lin Department of Computer Science, Bellcore, University of Edinburgh, MRE 2D-297 Mayfield Road, 445 South Street Edinburgh EH9 3JZ, U.K. Morristown, NJ 07962, USA. dka@dcs.ed.ac.uk liny@thumper.bellcore.com Voice: +44 31 650 5176 Voice: +1 (201) 829-5095 Fax: +44 31 667 7209 Fax: +1 (201) 829-5886 Program Committee I. Akyildiz (Atlanta, USA) A. Greenberg (Bell Laboratory, USA) R. Ayani (Kista, Sweden) P. Heidelberger (IBM, USA) F. Baiardi (Pisa, Italy) C. Lengauer (Passau, Germany) M. Bailey* (Tucson, USA) D. Nicol* (Williamsburg, USA) S. Balsamo (Pisa, Italy) T. Ott (Bellcore, USA) H. Bauer (Munich, Germany) B. Preiss (Waterloo, Canada) R. Fujimoto* (Atlanta, USA) S. Turner (Exeter, UK) * Member of the Steering Committee\\ Send e-mail to D. K. Arvind (dka@dcs.ed.ac.uk) for inclusion in the PADS electronic mailing list. Submissions: Prospective authors should submit six copies of the paper written in English and not exceeding 5000 words to either one of the Program Co-chairs. Papers must be original and not submitted for publication elsewhere. Each submission should include the following in a cover sheet: short abstract, contact person for correspondence, postal and e-mail addresses. To ensure blind reviewing, authors' names and affiliations should appear only on the cover sheet. Bibliographic references should be modified so as not to compromise the authors' identity. Papers submitted by electronic mail will not be considered. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: gottlieb@allan.ultra.nyu.edu (Allan Gottlieb) Newsgroups: comp.arch,comp.parallel Subject: Re: Information on the Connection m/c Date: 23 Nov 1993 13:49:15 GMT Organization: New York University, Ultracomputer project References: <1993Nov17.150636.415092@ucl.ac.uk> Nntp-Posting-Host: allan.ultra.nyu.edu In-Reply-To: ucapcdt@ucl.ac.uk's message of Fri, 19 Nov 1993 13:33:22 GMT In article <1993Nov17.150636.415092@ucl.ac.uk> ucapcdt@ucl.ac.uk (Christopher David Tomlinson) writes: I am looking for references/information on the connection machine (both cm1 and cm2) in particular anything regarding the design of the processing elements. I would be grateful if anybody could supply me with any leads. Thanks in advance Chris Tomlinson C.Tomlinson@ucl.ac.uk OK. Here is the cm-1/cm-2 section of the 2nd edition of almasi and gottlieb, highly parallel computing, benjamin-cummings pub. .*Copyedits done by allan in short hills NJ :fig id=cmchip frame=none .ce on .si CM1 width 3.50i depth 3.40i .sp 0.3i .si ALU2 width 4.00i depth 2.30i .ce off :figcap.The CM1 Connection Machine. .in 2 nobreak The upper part of this figure shows the CM1's external host and microcontroller, and one of the 4096 nodes that are connected into a 12-dimensional hypercube. Each node has four memory chips, a processor chip holding 16 processors, and a router connected to the hypercube network. The lower part of the figure shows one of the simple bit-serial processors as it performs the ADD operation. :efig. .hy off :h3.CM-1 Connection Machine (MIT/TMC) .hy on :spot id=cm1. 
.pi/Connection Machine .pi/CM1 .pi ref /data-parallel /SIMD (and SPMD) .pi ref /control-parallel /MIMD :p. The CM-1 Connection Machine :bibref refid=hill85. was a collection of 65,536 1-bit processors designed primarily for massively parallel solutions to artificial intelligence problems, although its manufacturer, Thinking Machines Corp., also demonstrated its use on other applications, such as database search. Like ILLIAC IV and GF11, it was an SIMD computer, driven by a conventional computer that acts as host and runs the main program that contains all the intelligence in the system. But unlike those :q.number crunchers:eq., the CM-1 was a :q.symbol cruncher:eq. designed to answer questions more like :q.Is Clyde an elephant?:eq. than :q.What proton mass does QCD predict?:eq.. It was designed for problems that can benefit from parallel pattern-matching search algorithms rather than algorithms that are heavy in floating-point computations. Floating-point computations are handled much better by the subsequent CM-2 and CM-5 offerings, which are discussed on page :spotref refid=cm2. and page :spotref refid=cm5., respectively. :p. Unlike the ILLIAC IV or GF11, the CM-1 uses a large number of small processors. Indeed, the CM-1 and CM-2 are the current champions in that department, with four times as many processors as the runner-up MPP (page :spotref refid=mpp.). .*.sc .*.re pre2col .*.5by7 .*+++.cp The processors in CM-1 are connected in a :f.256% mul %256:ef. grid; in addition, clumps of 16 processors are also interconnected by a packet-switched, 12-dimensional hypercube network for routing messages, and the 16 processors within a clump are linked in daisy chain fashion. Each processor performs a single-bit add in 750 nanoseconds; addition of 32-bit integers has a peak rate close to 2000 (32-bit) MOPS (millions of operations per second). .pi /MOPS :p. There are several ways to view the Connection Machine. One is that it tries to solve the von Neumann bottleneck (page :spotref refid=vnbottl.) by replacing one big processor over there talking to one big memory over here with instead many little processors distributed throughout the memory (in fact, it was originally called the Connection :hp1.Memory:ehp1. :bibref refid=hill81.). A second view is that it is an experiment in pushing the degree of parallelism to the limits allowed by technology. A third view has to do with a model of the brain as a highly parallel collection of relatively slow processes, with information stored as :hp1.connections:ehp1. rather than :q.bits on a shelf:eq. (see :q.Computing with Connections:eq. on page :spotref refid=cputcon.). Thus one would :hp1.expect:ehp1. drastic differences from ILLIAC IV and GF11, but there are some surprising similarities as well; we shall point these out as we go along. :h4.Processing Elements :p. The processing elements are so small that 16 of them fit on a single 1-:f.'cm' sup 2:ef., 68-pin CMOS chip, along with a message router that is connected to the hypercube network and a nanoinstruction decoder that controls the processors and the router (see the top part of :figref refid=cmchip.). This chip is surrounded by four 16-Kbit static RAM chips whose read/write ports are 4 bits wide, so that each processor can have its own private 4 Kbit memory. 
The processors are almost deceptively simple&emdash.as shown in the bottom part of :figref refid=cmchip page=no., each has a three-input, two-output, bit-serial arithmetic-logic unit (ALU), a little decoding logic, and 16 1-bit flag registers that are used for intermediate results, for personalizing the chip, and for communicating with other chips. In CM-1 terminology :bibref refid=hill85., a microcontroller positioned between the host and the Connection Machine executes a set of :hp1.microinstructions:ehp1. that tell how to translate :hp1.macroinstructions:ehp1. sent by the host into :hp1.nanoinstructions:ehp1. to be executed by each processor on the chip. Unlike microcode in the GF11, these nanoinstructions can be generated on the fly because the processors are so much simpler. :p. To give the reader an idea of the simplicity of the ALU, if it were implemented in logic gates instead of the scheme used in CM-1, it would have about twice as many gates as the 1-bit full adder shown in :figref refid=1add., plus some decode logic. The basic operation performed by the processor is to read 2 bits from locations A and B in memory and 1 bit from flag register F, perform a specified ALU operation on them, and write one of the output bits back to A and the other to the destination flag register Fdest. (The flag output also goes to the flags of neighboring processors via the daisy and grid connections.) The addresses of A, B, F, and Fdest are specified by the nanoinstruction. Examples of ALU operations are .sp .kp on .*--------------- .im (alufns draw) .*--------------- .kp off .sp and variations of these. The inputs can be inverted, and SUBTRACT, for example, is obtained by changing :f.B:ef. to :f.B bar:ef. in ADD. Each processor operation takes three clock cycles, one for each memory access, for a total instruction cycle of 750 nanoseconds. (When the daisy flag is the read flag, the design permits a signal to pass through all 16 processors on a chip in one instruction cycle.) :p. In the implementation of the CM-1 prototype, the ALUs do not really compute their outputs by performing logic operations; rather, they look up the proper outputs in a 16-bit register that contains the two output columns of the truth table corresponding to the ALU operation being performed (since there are three ALU inputs, the truth table has :f.2 sup 3:ef. = 8 rows). Each ALU consists of a decoder and some special gates that give it concurrent access to this latch. In CM-1, the 16 truth table bits are part of the nanoinstruction broadcast to the processors. By reprogramming the microcontroller, the instruction set can be easily changed, as in GF11, and for similar reasons&emdash.the desire to gain more experience before freezing the instruction set. Theoretically, the 16 bits allow :f.2 sup 16:ef. :q.instructions:eq., although most of these would be useless logical combinations of the ALU inputs. After an instruction set is chosen, the nanoinstructions can be shortened by generating the truth table bits on-chip from a shorter opcode, perhaps by using a PLA (programmed logic array). For example, a 5-bit opcode can specify 32 different ALU operations. :p.In addition to the 16 truth table bits specifying ALU operation, the microcontroller sends the following parameters to the processors during each ALU cycle: .sp .5 :ul compact. :li.:hp1.A-:ehp1. and :hp1.B- address:ehp1. (12 bits each) specifying the external memory address from which the ALU's two memory-input bits are read. The memory-output bit is also written into A. 
:li.:hp1.Read Flag:ehp1. (4 bits) specifying which one of the 16 flag registers is to supply the ALU's flag input bit. :li.:hp1.Write Flag:ehp1. (4 bits) specifying the flag register that is to receive the ALU's flag output. :li.:hp1.Condition Flag:ehp1. (4 bits) specifying which flag to consult for permission to proceed with the operation :spot id=cflag. (like :q.Mother, may I?:eq. in the children's game). :li.:hp1.Condition Sense:ehp1. (1 bit) specifying whether a 1 or a 0 shall mean :q.proceed:eq. in the Condition Flag. :li.:hp1.NEWS Direction:ehp1. (2 bits) specifying whether data are to move to the north, east, west, or south neighbor during this instruction. :eul. .sp .5 This nanoinstruction is relatively wide .pi/instruction set/Connection Machine and does contribute to a longer processor cycle. As :figref refid=cmchip page=no. shows, there are many more chip I/Os than pins, even neglecting the pins needed for control signals and power and ground connection, and so substantial time-sharing of the pins used for data must be done. :p.Much of the processor's capability comes from clever use of the 16 1-bit flag registers. Some of these are general-purpose, with no predefined hardware function, and are used for such things as carry bits in .*a series of ADD operations. The remainder have special hardware-related roles. Several are used for communication via the grid, daisy chain, or router: .sp .5 :ul compact. :li.The read-only :hp1.NEWS Flag:ehp1. receives the flag output of the ALU to the north, east, west, or south, depending on the NEWS parameter sent by the instruction. :li.The read-write :hp1.Router-Data Flag:ehp1. is used to .*enable the ALU flag output to be sent receive data from and send data to the message router via a special on-chip bus. :li.The :hp1.Router-Acknowledge Flag:ehp1. is set by the router when it accepts a message from a processor. :li.The :hp1.Daisy Chain Flag:ehp1. reads the flag output of the preceding processor on the daisy chain connection, and is used to resolve contention among the chip's processors during message routing. :li.The logical NOR of the :hp1.Global Flag:ehp1. of all the processors on a chip is available on one of the chip's pins for communication with the host. :eul. .sp .5 Together, these 16 bits hold the state of the processor, and since any of them can serve as the condition flag, they can be used to give a processor some independence beyond a strict SIMD model, in rather the same way as the 8 local condition-code bits of the GF11 and ILLIAC-IV processors are used. :p. Parts of the memory are also set aside for special information, such as the processor's absolute address. The processors can be sent a pattern to match against anything in their memories and told to set their flags in a certain way if they succeed, after which the selected processors can behave differently from the others even though all receive the same macroinstruction. For example, this technique is used to deliver a message from the router to the desired processor on the chip. Other special areas of memory are a status word that includes a bit indicating when a message is ready to be sent to the router, and another area set aside to receive messages from the router. The router also uses part of the memory to buffer messages going elsewhere. 
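To make the table-lookup ALU and the nanoinstruction fields above concrete, here is a rough C sketch; the field widths follow the list above, but the struct layout and helper names are only illustrative guesses, not the real CM-1 encoding.

#include <stdint.h>

/* One nanoinstruction, roughly following the fields listed above. */
struct nanoinstruction {
    uint16_t truth_table;    /* two 8-bit output columns: memory out, flag out */
    uint16_t a_addr, b_addr; /* 12-bit memory addresses A and B                */
    uint8_t  read_flag;      /* which of the 16 flags feeds the ALU            */
    uint8_t  write_flag;     /* which flag (Fdest) receives the flag output    */
    uint8_t  cond_flag;      /* flag consulted for permission to proceed       */
    uint8_t  cond_sense;     /* value of cond_flag that means "proceed"        */
};

/* One bit-serial ALU cycle for one processor: read bits A, B and flag F,
 * look both outputs up in the 16-bit truth table, and write them back.   */
static void alu_cycle(uint8_t *mem, uint8_t flags[16],
                      const struct nanoinstruction *ni)
{
    if ((flags[ni->cond_flag] & 1) != ni->cond_sense)
        return;                               /* conditional execution: sit this one out */

    unsigned a = mem[ni->a_addr] & 1;
    unsigned b = mem[ni->b_addr] & 1;
    unsigned f = flags[ni->read_flag] & 1;
    unsigned row = (a << 2) | (b << 1) | f;   /* 3 inputs -> 8 truth-table rows */

    mem[ni->a_addr]       = (ni->truth_table >> row) & 1;        /* memory output overwrites A */
    flags[ni->write_flag] = (ni->truth_table >> (8 + row)) & 1;  /* flag output goes to Fdest  */
}

With the truth table loaded with the sum and carry columns of a full adder (sum back to A, carry to a general-purpose flag used as Fdest), repeating this cycle once per bit position is exactly the bit-serial ADD described earlier.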
.*+++ .*+++2-28-93: placement of 2-page box that replaces fig id=router: .*+++ .* .*+++2-page box that formerly was 6by9 fig id=router (2-28-93): .fl on page even order .sa prertbx .bx left right .in 2 .ir 2 .boxfont .ce 1 :h4.Processor Action While Routing Messages :spot id=router. .*---------------- .im (router4 draw) .*---------------- The diagram above shows what happens in the CM-1 processing elements while a router message is being sent or received: :ol compact. :li. .*Several processor cycles are spent communicating with the host and .*obtaining clearance for this particular processor to send a message; The Global Flag is used to notify the host that this processor has a message to send, and the Daisy Flag is used to settle any contention with other processors on the chip. If this chip wins, one of its special flags is set and used as the Condition Flag (page :spotref refid=cflag.) during the next 32 instructions; processors in which this flag is not set will :hp1.not:ehp1. execute these instructions. :li.Thirty-two cycles are used to send the message contents to the router. To do this, the processor executes a sequence of SWAP instructions during which B is the address of the message in the memory and Fdest is the Router-Data Flag register. This execution transfers the message data 1 bit at a time from location B in memory to the router. (The router puts the data in another area of the memory that it uses as a message buffer.) All of this sequence is conditional on the processor's flag settings. .sp .ce 1 (continued on next page) .boxfend .bx off .re prertbx .fl off .fl on page odd order .sa prertbx .bx left right .in 2 .ir 2 .boxfont .sp :li.Next, the message address (in absolute coordinates) is read out and :q.fixed:eq., including making it relative to the router in question. This process is accomplished by XORing the router's own address and the message's absolute address, using a series of MOVE instructions. :li.The router sends back an acknowledge bit if it accepted and stored the message. This step completes the SEND cycle. The RECEIVE cycle begins next. :li.The processor sits idle for some cycles while the router prepares to deliver a message. It reads the address of the message in its highest-priority buffer. This message could be one that has been previously delayed by traffic and thus takes precedence over a more recent arrival. If the address is different from the router's, the message is switched onto one of the 12 outgoing router wires and sent out. If the address matches the node's, the router prepares to deliver it locally. :li.The router broadcasts a bit to its local processors that there is an incoming message. :li.The local processors compare their own local address with that of the message. :li.Another 32 cycles of SWAP instructions are used to put the message's contents into the receiving processor's local memory. This step ends the RECEIVE cycle. Since the SWAP instruction can read and write the memory at the same time, the RECEIVE cycle can be overlapped with the next SEND cycle as shown, at the cost of a small gap between steps 7 and 8. The delay between message initiations matches the length of a dimension cycle. :eol. .sp .45 If the message address does not match that of the local node, the router may start transferring the message to another router (start filling the buffer of another router) at step 8 above. 
.sp .bx off .boxfend .fl off .re prertbx .*+++ END OF THE 2-PAGE BOX ++++++++++++ .*+++ .*page break needed to make LILCUBE, text align (27July88): .*.cp :h4.Routing Element and Network :p. :spot id=sm3cube. :psc proc='aps5w45 3820A' .si LILCUBE width 1.20i depth 0i :epsc. :psc proc=psa. .sb -1.2i .DD $LILCUBE LILCUBE EPSBIN .PO $LILCUBE WIDTH 1.2I SCALE .sb :epsc. .in 1.5i for 1.1i nobreak In addition to quick access to its immediate neighbors, a processor also has slower access to any other processor via a 12-dimensional hypercube network formed by the connections between the routers. How much slower depends on which of two routing mechanisms is used, :hp1.cut-through:ehp1. or :hp1.store-and-forward:ehp1.. As we discuss on page :spotref refid=worm., store-and-forward routing in a hypercube network means that the entire message has to arrive at one node before any of the message is forwarded to the next node. The time to transmit a message is, therefore, the :hp1.product:ehp1. of the message length and the number of cube stages. Cut-through (or :hp1.wormhole:ehp1.) routing means that a message element (bit, byte, or word) is forwarded to the next node right away, that is, without waiting for any trailing pieces of the message (the network is :hp1.pipelined:ehp1.). The message transmission time is therefore the :hp1.sum:ehp1. of the message length and the number of cube stages; thus it is much faster than store-and-forward. :p. The trade-off in the case of the Connection Machine is that the cut-through routing requires bypassing, or :q.short-circuiting:eq., the router elements and making the user's program totally responsible for steering each message to its destination and avoiding collisions with other messages. By contrast, although the router uses store-and-forward and is therefore slower, it handles the routing automatically. We will describe how below. .pi/routing/store and forward .pi/routing/wormhole .pi/store and forward routing .pi/wormhole routing .pi ref /cut-through routing/wormhole routing :p. A router's function is to accept messages from its 16 local processors for transmission to one of the 12 other routers to which it is directly connected; it also receives messages from those other routers and either delivers them to its local processors or else forwards them to other routers. A typical CM-1 message consists of 32 data bits, plus the address information needed to reach the destination, plus a return address, for a total of slightly more than 60 bits. :p.The routing algorithm used is the one described on page :spotref refid=cubrout.. It consists of 12 steps, or :hp1.dimension cycles:ehp1.; during step :f.i:ef. the message is sent to the adjacent node in dimension :f.i:ef. if the :f.i:ef.th bit of its router address is 1. This router address is the address of the destination node :hp1.relative to the originating node:ehp1. and is obtained from the exclusive OR of the two absolute addresses (see page :spotref refid=cubrout.). :spot id=cm1rout. This relative address is used to guide the message to its destination; each time it is sent along some dimension, the corresponding 1 in the address is set to 0. When the address is all 0s, the message has arrived. :fn. Another way to look at this situation is that every processor thinks that it is at the origin. The reader is encouraged to experiment with the simple 3-cube shown on page :spotref refid=sm3cube.. :efn. :p. 
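The relative-address rule just described is easy to state in code. The following rough C sketch traces the path of a single message under the dimension-ordered rule; contention, buffering, and desperation routing are ignored, and the function name and its output are purely illustrative.

#include <stdio.h>

#define DIMENSIONS 12   /* the CM-1 routers form a 12-dimensional hypercube */

/* Walk a lone message from router src to router dst, one dimension per
 * loop iteration: move along dimension d whenever bit d of the remaining
 * relative address is 1, then clear that bit.                             */
static void route(unsigned src, unsigned dst)
{
    unsigned node = src;
    unsigned rel  = src ^ dst;      /* relative address: XOR of the absolute ones */

    for (unsigned d = 0; d < DIMENSIONS; d++) {
        if (rel & (1u << d)) {
            node ^= 1u << d;        /* hop to the neighbour in dimension d   */
            rel  &= ~(1u << d);     /* one 1-bit cleared: one step closer    */
            printf("dimension %2u: message now at router %u\n", d, node);
        }
    }
    /* rel is now all zeros, so node == dst: the message has arrived. */
}

int main(void) { route(0u, 2741u); return 0; }

A message that loses an arbitration on dimension d must wait until the cyclical schedule comes back around to d, which is how the idealized path above stretches into the multi-petit-cycle, multi-millisecond delays discussed below. :p.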
The complete process of sending and receiving a message via the hypercube network involves a SEND cycle, during which a processor injects a message into its router; one or more petit cycles to reach the target router; and then a RECEIVE cycle, in which the router ejects the message into a local processor. The processor SEND and RECEIVE cycles are synchronized with the router dimension cycle as shown on page :spotref refid=router.. :hp1.At the beginning and end of a dimension cycle, a message (address + data) is completely in either a router buffer or in a processor's private memory:ehp1. During a dimension cycle, a message may move from one router to another (along one of the 12 edges, or dimensions, of CM1's hypercube), or a message may move from a processor's memory to a router's buffer or vice versa. :p. Each message bit takes one processor instruction cycle (750 nS) to transmit, and so a dimension cycle is a little over 60 instruction cycles long. The set of 12 dimension cycles is called a :hp1.petit cycle:ehp1., which is thus about 800 instruction cycles long. This time is the minimum spent by a message going all the way across the router network, assuming it is not delayed by traffic. :fn. A grand cycle is the time for :hp1.all:ehp1. messages generated by some operation to arrive through heavy traffic. :efn. A message moving only from one adjacent router to the next and not running into conflicts with other messages could take as little as 3 dimension cycles or as many as 14, depending on the phase of the petit cycle during which it leaves the processor. Since the algorithm accesses the dimensions in a cyclical fashion, a message that runs into a conflict with another message during a dimension cycle must wait at least one full petit cycle before it gets a second chance to move along that dimension. The average network delay can easily be several petit cycles or several thousand instruction cycles, amounting to several :hp1.milliseconds:ehp1. in CM1. :fn. By contrast, the nonblocking Benes network keeps the GF11 network pipeline delay constant. :efn. Thus, this mechanism should :hp1.not:ehp1. be used for casual one-on-one communication, but rather for parallel set-up of and communication within / among the :hp1.active data structures:ehp1. discussed on page :spotref refid=actdata.. :p. The diagram in the box on page :spotref refid=router. shows what a processor does to send or receive a message via the router. Note that all processors perform these instructions, even though the inputs and outputs of some are inhibited by their flag settings, and so no other processing can be done until all messages are delivered. One could say, therefore, that communication dominates computation in the Connection Machine. Its builders might reply, however, that communication :hp1.is:ehp1. computation for the problems of interest. :p. The mechanics of the router itself :fn. By way of analogy, the router can be thought of as a train station's waiting room, with passengers moving in accord with a dictatorial set of rules. (The whole Connection Machine may owe more inspiration to the MIT Model Railroad Club than is generally recognized.) :efn. are as follows. All messages reaching a router go into buffers unless these are full. The router is continually cycling messages into and out of its buffers. 
During a dimension cycle, it can accept a message from either a local processor or another router and put it in a buffer, and it can take a message from a buffer and deliver it to a local processor or another router. On dimension cycle :f.i:ef., the router tries to send any stored message with a 1 in bit position :f.i.:ef. Messages leaving these buffers encounter a cross-point-like switch that sends one message onto output wire :f.i:ef. and the rest (if there are any) back to the buffers. The time of one of these round trips is equal to a dimension cycle. A first in, first out (FIFO) discipline is imposed, so that the most recently arrived message is put in the lowest-priority buffer, and the oldest message goes out first. :p.If the buffers are full and multiple messages arrive, one is put in the lowest-priority buffer, and the old contents of this buffer plus the other messages are sent out to other routers even if this process moves them a step further from their destinations. By way of consolation, these involuntarily rerouted messages get an increase of priority. This :hp1.desperation routing:ehp1. .pi/desperation routing .pi/routing/desperation is expected to occur only rarely. However, its effects are not yet fully known. :h4.Programming Environment :p. CM-1 compilers have been written for C, Fortran, and Lisp. Extensions are added to these languages to allow parallel data structures. Programs are described in terms of :hp1.virtual processors:ehp1. to make them independent of the number of hardware processors, which are multiplexed to the extent necessary to support this abstraction. Hillis's book :bibref refid=hill85. describes one such extension called CmLisp. With respect to parallelism, its computational model is similar to that of APL. All concurrent operations involve a generalized, vectorlike data structure called a :hp1.xector:ehp1., each of whose elements is stored in a separate processor. .pi/xector Xector operations are specified .*This can be made to happen by using :q.alpha notation:eq. to denote the kind of :q.apply-to-all:eq. parallelism inherent in APL operations. .pi/alpha For example, the addition of two vectors 1 2 and 2 3 to produce a vector 3 5 is expressed in CmLisp as .sp .ce 1 (:f.alpha +%%:ef. '&lbr.a &rarrow.1 :f.%%:ef.b &rarrow.2&rbr.:f.%%:ef.'&lbr.a&rarrow.2 :f.%%:ef.b &rarrow.3&rbr.) .sp and in APL as .sp .ce 1 1 2 + 2 3 .sp (In the xector enclosed in &lbr. &rbr. brackets, the arrow &rarrow. connects the index and value of each element of the xector. This mapping performed by a xector can involve symbols as well as numbers.) :q.Beta notation:eq. is used to reduce the elements of a xector into a single value, much like the APL reduction operator. .pi/beta For example, the addition of all elements of a vector 1 2 3 to produce the sum 6 is represented in CmLisp as .sp .ce (:f.beta:ef.+ '&lbr.A&rarrow.1 B&rarrow.2 C&rarrow.3&rbr.) .sp and in APL as .sp .ce 1 +/1 2 3 .sp :spot id=cmlisp. Hillis :bibref refid=hill85. describes the use of these constructs in more detail. To quote his summary, :q.A Connection Machine is the direct hardware embodiment of the :f.alpha:ef. and :f.beta:ef. operators. Processors &lbracket.perform&rbracket. :f.alpha,:ef. routers &lbracket.permit&rbracket. :f.beta:ef.. The contents of the memory cells are xectors.:eq. It is interesting to compare the SIMD parallelism of CmLisp and these constructs with the MIMD parallelism of Multilisp and its :q.future:eq. construct (page :spotref refid=mulisp.). 
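For readers who do not know CmLisp or APL, a rough sequential C paraphrase of the two constructs may help; on the Connection Machine each array element would of course live in its own processor rather than be visited by a loop, and the function names here are only illustrative.

#include <stdio.h>

/* "alpha +": apply addition to every element pair (apply-to-all parallelism). */
static void alpha_add(const int *a, const int *b, int *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* "beta +": reduce all elements of a xector to a single value. */
static int beta_add(const int *a, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

int main(void)
{
    int a[] = {1, 2}, b[] = {2, 3}, c[2];
    alpha_add(a, b, c, 2);                              /* gives {3, 5}, as in the text */
    int x[] = {1, 2, 3};
    printf("%d %d  %d\n", c[0], c[1], beta_add(x, 3));  /* prints "3 5  6"              */
    return 0;
}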
.pi/LISP/CMLISP .pi/LISP/parallel :h4.CM-1 Applications :p. One of the primary motivations for the design of the CM-1 Connection Machine :bibref refid=hill85. was the retrieval of common-sense knowledge from a semantic network, a data structure frequently used in artificial intelligence to model the human brain's ability to manipulate relatively unstructured data and quickly extract (i.e., infer) facts that were not explicitly put into the database. A parallel method for doing this modeling using an earlier, simpler system called NETL was described on page :spotref refid=NETL.. This approach is one touched on by Fahlman :bibref refid=fahl80. when he discusses message-passing systems with :q.three flavors of parallelism:eq.; :spot id=3flav. .pi/three flavors of parallelism .pi/parallelism/three flavors of in order of increasing complexity, the three are :hp1.marker passing:ehp1. .pi/marker passing (such as NETL's 1-bit markers), .pi/value passing :hp1.value passing:ehp1. (a kind of multiple-bit marker passing), and true :hp1.message passing:ehp1. .pi/message passing (as supplied by the Connection Machine). Markers and values are passed among nodes and perhaps combined at a node to form a sum, minimum, or maximum, but they are not kept separate, as true messages are. The processors in the Connection Machine are considerably more powerful than NETL's in that they transmit and maintain individual messages of (more or less) arbitrary length and perform arbitrary logical and arithmetic operations on them. The router can :q.soft-wire:eq. two arbitrary processors together, since it can deliver a message between any two processors and since the received message includes the sending processor's return address. Hillis :bibref refid=hill85. describes how :q.active data structures:eq. such as semantic networks are :spot id=actdata. built up out of trees of processors and how Lisp's CONS operation is related to the building up of such networks. :p. However, the early demonstrations of the Connection Machine have not been on semantic network problems, but on such applications as document retrieval and fluid flow. These demonstrations have indeed used a processor per data item, but most have not used the router network, relying instead on only the grid connections. One performance example cited :bibref refid=stan86. is the retrieval of a news story matching some key words from among 50,000 news stories and the subsequent retrieval of all related stories in a time as low as a few milliseconds. Considerable work has been done by the Connection Machine people on :q.thinking in parallel:eq. and using parallel algorithms, but the computational model used so far has not used the most unique feature of the Connection Machine, namely the routers, and so many of the results are applicable to other massively parallel machines made up of large arrays of small processors, such as the DAP and the MPP (see page :spotref refid=mpp.). It is not yet clear what the Connection Machine's most important application will be. :h4.Comparisons :p. Other examples of massive parallelism via many small processors in SIMD mode have been configured as meshes or trees. The mesh-connected MPP and DAP are treated on page :spotref refid=mpp.ff. Tree machines are discussed on page :spotref refid=treemch.. Trees are nice data structures, but as hardware structures they pose a problem: you want to be able to put the root at any PE, which you can't do with tree machines; you can with shuffle and cube networks. :p. 
Another example of a cube-connected, message-passing architecture is provided by the Cosmic Cube (page :spotref refid=cosmic.), but the processors there are much more substantial and fewer in number, and the operation is MIMD. .*2nd edition copyedit done by allan at NYU. :H4. CM-2 :spot id=cm2. :p. When introducing the CM-2 successor, Thinking Machines shifted their emphasis from symbolic to numeric processing, a trend that has continued with their subsequent offerings, including the CM-5 MIMD machine mentioned on page :spotref refid=cm5.. The major architectural change from the CM-1 is the addition of a Weitek-based floating-point coprocessor with every 32 of the 1-bit integer processors. This ensemble of 2048 floating-point units is able to achieve several GFLOPS (billions of floating-point operations per second) .pi /GFLOPS on favorable problems well suited for SIMD operation. Not surprisingly, such high floating-point performance has attracted considerable attention in the numerical computing community, and the Connection Machine is now considered more a number cruncher than a symbolic processor. .*added by Almasi 3-30-92: The model 200, introduced in 1991, operates the 1-bit processors and also the Weitek chips at 10 MHz, thus obtaining 10 MFLOPS (64-bit) from each Weitek chip and a peak of 20 GFLOPS from a full-scale CM-2. :p. Early versions of the software stored a floating-point variable in the memory associated with a single bit-serial processor, and hence 32 data transfers were needed to send a single-precision (32-bit) value to the Weitek. :fn. During these 32 cycles, each of the 32 processors associated with a Weitek can transfer a value. There is a buffer present to hold these 32 values as they are (simultaneously) arriving and then deliver them sequentially to the Weitek. Hence 32 cycles after they arrive in the buffer, they have all been sent to the processor. :efn. Newer :hp1.slice-wise:ehp1. .pi /slice-wise :spot id=slicew. software spreads each single-precision value across the 32 processors so that it can be transmitted all at once. :p. When using the slice-wise model, the CM-2 is often viewed as a parallel computer containing 2048 32-bit processors. That is, the machine has become a collection of Weiteks with an integer support unit (that happens to contain 32 1-bit processors) rather than a collection of 1-bit processors with Weiteks added to boost floating-point performance. It is interesting to note that a software change was the last step needed for the architecture to change appearance. One should also recognize that the slice-wise CM-2 has proven to be quite an effective performer on numerical problems. The 1989 Gordon Bell award for highest performance was won by a team using a CM-2, as was the 1990 Gordon Bell award for highest compiler-generated speedup (an impressive 1900 out of a maximum possible 2048). :p. Other improvements include a vast increase in memory from 32 MB in the CM-1 to 8 GB (8 billion bytes) in the CM-2, support for parallel I/O to a Data Vault disk system, and an integrated color frame buffer for visualization. More details can be found in :bibref refid=tuck88. and in :bibref refid=tmi90.. :spot id=cm1end.
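The difference between the two layouts can be sketched in a few lines of C; the constants and helper names below are only illustrative. Field-wise storage keeps all 32 bits of a value in one processor's private memory, so values cross to the Weitek one transfer at a time, while slice-wise storage gives bit j of a value to processor j of the 32-processor group, so the whole word can be presented at once.

#include <stdint.h>
#include <stdio.h>

#define GROUP 32   /* 32 bit-serial processors share one Weitek FPU */

/* Slice-wise store: spread the 32 bits of one value across the group,
 * one bit per processor, so the word reaches the FPU in a single step. */
static void slice_store(uint32_t value, uint8_t bit_mem[GROUP])
{
    for (int p = 0; p < GROUP; p++)
        bit_mem[p] = (value >> p) & 1;    /* processor p holds bit p */
}

/* Reassemble the word from the 32 one-bit memories. */
static uint32_t slice_load(const uint8_t bit_mem[GROUP])
{
    uint32_t value = 0;
    for (int p = 0; p < GROUP; p++)
        value |= (uint32_t)bit_mem[p] << p;
    return value;
}

int main(void)
{
    uint8_t bits[GROUP];
    slice_store(0x40490FDBu, bits);       /* bit pattern of single-precision pi */
    printf("round trip: 0x%08X\n", slice_load(bits));
    return 0;
}

Under the older field-wise layout, each of the 32 processors would instead hold a whole value in its own memory and feed it to the Weitek one value at a time through the buffer mentioned in the footnote.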
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Shyam.Mudambi@ecrc.de (Shyam Mudambi) Subject: Parallel Debugger for Sun Multiprocessors running Solaris Keywords: Parallel Debugging, Solaris Sender: news@ecrc.de Reply-To: Shyam.Mudambi@ecrc.de Organization: European Computer-Industry Research Centre GmbH. Date: Tue, 23 Nov 1993 13:54:06 GMT Apparently-To: comp-parallel@uunet.uu.net We have ordered a Sun SparcStation 10 with 4 processors for developing parallel applications. I would like to know if there exist any parallel debuggers for Solaris (from Sun) and if so whether they also have some functionality for debugging threads. Please send any answers directly to mudambi@ecrc.de. Thanks! Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: bal@cs.vu.nl (Henri Bal) Subject: Position for parallel programming researcher in Amsterdam Message-ID: Sender: news@cs.vu.nl Organization: Fac. Wiskunde & Informatica, VU, Amsterdam Date: Tue, 23 Nov 1993 15:45:05 GMT The Dept. of Mathematics and Computer Science of the Vrije Universiteit in Amsterdam, The Netherlands, has a research group working on parallel programming of distributed systems. The group consists of eight people (researchers, programmers and students) and is headed by Dr. Henri Bal. The research extends the work on the Orca programming language, which uses a form of object-based distributed shared memory [see IEEE Trans. on Softw. Eng., March 1992 and IEEE Computer Aug. 1992]. It makes use of the Amoeba distributed operating system [see CACM December 1990], being developed here under the leadership of Prof. Andrew Tanenbaum. The Orca and Amoeba groups work closely together. Some of the current research topics of the Orca group are: - Scalable and portable runtime systems for Orca - Advanced compile-time and run-time optimizations for shared objects - Parallel applications using shared objects - Object models - Higher-level (application-oriented) languages - Tools (e.g. for performance evaluation) A position is available for a Postdoc researcher for 1-2 years (preferably two years) to do work that fits in with the current research of the group. Candidates are expected to have (or soon have) a PhD in Computer Science and must have contributed significantly to the fields of parallel or distributed computing, as shown by publications in important journals and conferences. Experience with distributed shared memory, runtime systems, compilers, or performance evaluation tools for parallel languages is an advantage, since our work is experimental in nature. The department has an experimental Amoeba system consisting of 80 SPARCs and 48 MC68030's. The group uses this system for its research. The group is partially funded by a substantial Pionier research grant from the Netherlands organization for scientific research (N.W.O.). For more information: Dr. Henri Bal Vrije Universiteit Dept. of Mathematics and Computer Science De Boelelaan 1081a 1081 HV Amsterdam The Netherlands Email: bal@cs.vu.nl Phone: +31 20 5485574 (secretary is: +31 20 5487273) Fax: +31 20 6427705 Applicants should send the following items (preferably by email in PostScript, troff -ms, or LaTex format): 1. Curriculum vitae (name, address, degrees with school, date, and major, work experience, etc.) 2. List of publications, patents, awards, and similar items. 3. The name, FAX number, and email address of three professional references. 4. 
A statement describing the kind of research you would like to do here, and how this relates to your background. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: richter@phoenix.informatik.uni-stuttgart.de (Jan-Peter Richter) Subject: MIMD code with load imbalances wanted Sender: richter@phoenix.informatik.uni-stuttgart.de (Jan-Peter Richter) Reply-To: richter@informatik.uni-stuttgart.de Organization: IPVR Stuttgart University Date: Tue, 23 Nov 1993 17:12:16 GMT Apparently-To: hypercube@hubcap.clemson.edu ********** MIMD CODE WITH LOAD IMBALANCES WANTED ********* For our research in the field of dynamic load balancing we are looking for programs that show dynamic load imbalances. The goal of our project is to enhance the performance of message-passing MIMD programs by dynamic reconfiguration of processes and dynamic assignment of task messages to server processes. If you have a program that we could use as a sample application, please let us know! Your program should have the following properties: ABSTRACT PROGRAMMING MODEL: Your program must be written for MIMD architectures with explicit message passing. SIMD, vector or (virtual) shared memory code is of no use for us. It is not necessary that any standard interface like PVM is used as long as the general message passing semantics is easy to understand and thus easily ported. Our machine is an Intel PARAGON with the NX message passing library. Your code may be written for workstation clusters or supercomputers. SPMD and server/client parallelized algorithms are welcome! SCALABILITY Your program must be parallelized in a way that the number of processors can easily be adjusted (as a parameter) at the moment of program start. This adjustment should be possible on the range up to ~50 processors or beyond. (It doesn't matter if the speedup breaks down on large numbers of processors as long as the program works at all.) CODE AVAILABILITY Your code must be available to us in full source. If any copyright restrictions apply to the program we will not give the code away to any third party. Licence fees, however, cannot be paid. LANGUAGE Your code should be written in C (preferred) and/or FORTRAN with additional message passing primitives. A well documented program would be nice. We would ask for your help with not so well documented programs. In any case, we would be grateful for your cooperation on the subject of your program. SUBJECT Your program should deal with real-world problems like fluid dynamics or raytracing (as a visualization technique). Calculating the Mandelbrot set is *not* a real-world problem. BEHAVIOUR Your program should show the behaviour of load imbalances. That is (in a nutshell), some processors are idle/waiting for messages (new data to process) while others are busy and have not reached the point of synchronization (sending the data). We are especially interested in programs where load imbalances occur dynamically, i.e. during run time one process is lightly loaded in one phase of the execution and heavily loaded in a later phase. (This effect may occur in adaptive mesh or grid oriented algorithms.) The total run time of your program should last from minutes to hours (depending on the number of processors and input data). If you have such a program you can help us with our research!
Please contact Jan-Peter Richter (richter@informatik.uni-stuttgart.de) Thanks, Jan-Peter Richter Universitaet Stuttgart, IPVR Breitwiesenstrasse 20-22 70565 Stuttgart Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: laird@Think.COM (Laird Popkin) Subject: looking for a Travelling Salesman code for CM Date: 23 Nov 1993 19:01:12 GMT Organization: CM Network Services Sender: laird@marble.com Nntp-Posting-Host: cmns.think.com Summary: flying too much... Keywords: CM2, CM5, Travelling Salesman, flying A friend of mine posted the following in a local newsgroup, and I thought that perhaps someone here could give some pointers to travelling salesman codes, perhaps for the CM2 or CM5 since that's what we can get to most easily. I'd guess that solving the problem of a best path between all the airports in all the states in order to hit every state is probably beyond smaller machines that we can get to. Please reply either to me (laird@marble.com) or the pilot who posted this (marc@marble.com), as I don't read this newsgroup too frequently. Thanks! ---------------------------------------------------------------------- Fellow pilots: This Spring, Tiron and I are going to attempt to set a world record in our Aerostar. The existing record for our class aircraft to land in all 48 contiguous United States is 7 days, 4 hours, 32 minutes; this record was set, not surprisingly, in an Aerostar. FYI, the record for landing in the capital of each of the 48 states is 13 days and change. The reason we decided to attack this record is simple: the Aerostar is made for record-busting. It's extremely fast, yet it doesn't have jet engines, so we don't burn more fuel at low altitude than at high. Also, it's considerably cheaper to attempt this kind of record in a recip. than in a jet-powered aircraft. Here's where you come in. While not jet-expensive to fly, the Aerostar does cost us about $250/hour, and there are a lot of hours in 48 states. Therefore, it would be extremely upsetting to attempt this flight and to fail. I figure that we (at Marble, along with our friends on the Internet) have more computing power available to us than the previous record setters. Tiron has already gotten a tentative offer of CM-5 time to help us flight plan. So, in short, I need to solve the travelling salesman problem, with the wind and weather factor thrown in for good measure. How's that for a sports challenge? While asking for a solution might be a bit much, it would probably suffice to develop a piece of software that would attempt to enumerate various flight plans, accounting for typical winds and weather, for our fuel capacity and air speeds, and then computer the expected times of those flight plans. While we probably cannot guarantee the "best" solution, we should definitely be able to come up with a number of "better" solutions. Perhaps such a piece of software already exists... I'm looking for good ideas, good advice, and good flight plans. Anyone want to help (if not, I'll have to attempt to solve this one weekend...)? Marc Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: davec@ecst.csuchico.edu (Dave Childs) Subject: concurrent langauges for teaching Date: 23 Nov 1993 21:31:45 GMT Organization: California State University, Chico Nntp-Posting-Host: hairball.ecst.csuchico.edu I am trying to track down a good concurrent language for teaching concurrent concepts. 
Something based on C, C++, or pascal would work nicely, but we are open to other possibilities. Thanks for your help, David Childs, California State University Chico

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: gbyrd@mcnc.org (Gregory T. Byrd) Subject: CM-5 memory architecture Keywords: CM-5 Sender: daemon@mcnc.org (David Daemon) Nntp-Posting-Host: robin.mcnc.org Reply-To: gbyrd@mcnc.org (Gregory T. Byrd) Organization: North Carolina Supercomputing Center Date: Tue, 23 Nov 1993 21:49:59 GMT Apparently-To: comp-parallel

In the CM-5 Technical Summary (TMC, Oct. 1991), it says, "The memory controller is replaced by four vector units." This is in comparing the node with the vector option to the "basic" node. The diagram for the vector node shows the vector units in between the MBUS and the memory units, and the text indicates that the vector units "perform all the functions of a memory controller." However, in the November '93 CACM article, the block diagram for the node shows a connection from the memories to the MBUS, as well as the connections to the vector units. Which diagram is correct? ...Greg Byrd MCNC / Information Technologies Division gbyrd@mcnc.org 3021 Cornwallis Road / P.O. Box 12889 (919)248-1439 Research Triangle Park, NC 27709-2889

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Cristina Boeres Subject: contents of the ACM workshop on debugging Sender: UseNet News Admin Organization: Department of Computer Science, University of Edinburgh Date: Wed, 24 Nov 1993 11:34:27 GMT Apparently-To: comp-parallel@uknet.ac.uk -- I would be very pleased to know the contents of the Proceedings ACM workshop on debugging - 1993 - California..
thanks in advance cristina # Cristina Boeres --- Depart. of Computer Science - University of Edinburgh # # JCMB - King's Buildings - Mayfield Road # # o__ Edinburgh Scotland EH9 3JZ - Phone #: (031) 650 5141 # Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm,sci.math.num-analysis From: clark@bruce.nist.gov Subject: NIST SBIR request Sender: news@dove.nist.gov Organization: NIST Date: Tue, 23 Nov 1993 23:19:07 GMT Apparently-To: comp-parallel@uunet.uu.net I posted this earlier this month, but as the deadline has been extended I thought some of you might wish to look at it again. >From the Department of Commerce Program Solicitation, Small Business Innovative Research for FY 1994, p.37: "8.7.5 SUBTOPIC: Schroedinger Equation Algorithms for MIMD Architectures NIST programs in laser-atom interaction utilize models that require the numerical solution of a many-particle, time-dependent Schroedinger equation. We are interested in algorithms for the solution of this equation that can take advantage of computational parallelism, particularly multiple-instruction, multiple-data (MIMD) architectures. We require a set of computational modules that can solve the initial- value problem for the Schroedinger equation on a multidimensional spatial grid. Such modules should be written in the Fortran or C languages, and use PVM for interprocessor communication so that they can be executed on a heterogeneous network of computers. They should provide standard interfaces for visualization, e.g. calls to AVS, PV-WAVE, or Display PostScript. Preference will be given to proposals that optimize fast Fourier transform techniques for MIMD architectures, or which also provide for the solution of large eigenvalue problems." Further information on the SBIR program may be obtained from Mr. Norman Taylor A343 Physics Building National Institute of Standards and Technology Gaithersburg, MD 20899 (301)975-4517 The deadline for receipt of proposals is January 18, 1994. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: isclyhb@leonis.nus.sg (Benjamin Lian) Subject: Bulk Synchronous Model? Date: 24 Nov 1993 04:33:32 GMT Organization: National University of Singapore Nntp-Posting-Host: leonis.nus.sg X-Newsreader: TIN [version 1.2 PL0] I am looking for papers on Valiant's Bulk Synchronous Model for parallel computing. Would greatly appreciate pointers in the right direction to start a literature search. Thanking you, -- Benjamin Lian isclyhb@leonis.nus.sg Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: louk@teleride.on.ca (Lou Kates) Subject: Kendall Square Research address Organization: Teleride Sage Ltd. Date: Wed, 24 Nov 1993 00:01:41 -0500 Does anyone have contact info for Kendall Square Research? Lou Kates, louk@teleride.on.ca Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: isclyhb@leonis.nus.sg (Benjamin Lian) Subject: Concurrent access to data structures Date: 24 Nov 1993 08:00:24 GMT Organization: National University of Singapore Nntp-Posting-Host: leonis.nus.sg X-Newsreader: TIN [version 1.2 PL0] I am trying to see how one might provide concurrent access to dynamic linear and branching structures in a concurrent programming language. 
I can think of a couple of instances where concurrent access to parts of a tree might be handy, but really how *useful* such a facility might be is not clear to me. Syntactic specification does not seem to be a real problem, except that there might be the supplementary question of how much impact there might be from the language's synchronization and communication model. The big question is probably that of how to provide access control reasonably cheaply at runtime. Excluding all possibility of concurrent, but serialised, read/write access to the data structure may not be reasonable. [That's why the KSR memory architecture is so attractive to me, but I don't have one because we can't afford one.] I'm sorry if I sound somewhat obtuse, but I'm still in the process of crystalising my thoughts, and may either miss the `obvious' or be asking an FAQ. Cheers, -- Benjamin Lian isclyhb@leonis.nus.sg Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: fredrikm@eik.ii.uib.no (Fredrik Manne) Subject: Partitioning of loops Date: 24 Nov 1993 09:27:13 GMT Organization: Institute of Informatics, University of Bergen, Norway Nntp-Posting-Host: eik.ii.uib.no Sender: usenet@uib.no I have been working with symmetric multi processing (SMP) lately and was wondering about the algorithms that are used for partitioning and scheduling of loops. The ones I have seen seems rather simple. My question relates to what one does when the iterations take different amount of time. Does anyone have any references to more sofiticated algorithms for loop partitioning that can be used in such cases? Fredrik Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Damal Arvind Subject: Final CFP: ACM/IEEE/SCS 8th Workshop on Par. and Dist. Simulation ============================================= Final Call For Papers ACM/IEEE/SCS 8th Workshop on Parallel and Distributed Simulation University of Edinburgh, Scotland, U.K. July 6-8 1994, ============================================= Sponsors: ACM Special Interest Group on Simulation (SIGSIM), IEEE Computer Society Technical Committee on Simulation (IEEE-TCSIM), and Society for Computer Simulation (SCS) Topics: PADS provides a forum for presenting recent results in the simulation of large and complex systems by exploiting concurrency. The scope of the conference includes, but is not limited to: * Algorithms, and methods for concurrent simulation (e.g. optimistic, conservative, discrete, continuous, event-driven, oblivious) * Programming paradigms for concurrent simulation (e.g. object-oriented, logic, functional) * Models of concurrent simulation (e.g. stochastic, process algebraic, temporal logic) * Performance evaluation (both theoretical and experimental) of concurrent simulation systems * Special purpose concurrent simulation (e.g. multiprocessor architectures, distributed systems, telecommunication networks, VLSI circuits, cache simulations) * Relationship of concurrent simulation and underlying architecture (e.g. SIMD and MIMD machines, geographically distributed computers, tightly-coupled multiprocessors) Schedule: Deadline for Paper submission : December 1, 1993 Notification of acceptance : March 1, 1994 Camera ready copy due by : April 15, 1994. Invited Speaker : LEONARD KLEINROCK (Los Angeles, USA) General Chair : Rajive Bagrodia (Los Angeles, USA) Local Arrangements: Monika Lekuse (Edinburgh, U.K.) Program Co-chairs D. K. 
Arvind Jason Yi-Bing Lin Department of Computer Science, Bellcore, University of Edinburgh, MRE 2D-297 Mayfield Road, 445 South Street Edinburgh EH9 3JZ, U.K. Morristown, NJ 07962, USA. dka@dcs.ed.ac.uk liny@thumper.bellcore.com Voice: +44 31 650 5176 Voice: +1 (201) 829-5095 Fax: +44 31 667 7209 Fax: +1 (201) 829-5886 Program Committee I. Akyildiz (Atlanta, USA) A. Greenberg (Bell Laboratory, USA) R. Ayani (Kista, Sweden) P. Heidelberger (IBM, USA) F. Baiardi (Pisa, Italy) C. Lengauer (Passau, Germany) M. Bailey* (Tucson, USA) D. Nicol* (Williamsburg, USA) S. Balsamo (Pisa, Italy) T. Ott (Bellcore, USA) H. Bauer (Munich, Germany) B. Preiss (Waterloo, Canada) R. Fujimoto* (Atlanta, USA) S. Turner (Exeter, UK) * Member of the Steering Committee\\ Send e-mail to D. K. Arvind (dka@dcs.ed.ac.uk) for inclusion in the PADS electronic mailing list. Submissions: Prospective authors should submit six copies of the paper written in English and not exceeding 5000 words to either one of the Program Co-chairs. Papers must be original and not submitted for publication elsewhere. Each submission should include the following in a cover sheet: short abstract, contact person for correspondence, postal and e-mail addresses. To ensure blind reviewing, authors' names and affiliations should appear only on the cover sheet. Bibliographic references should be modified so as not to compromise the authors' identity. Papers submitted by electronic mail will not be considered. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: swnet.jobs,comp.parallel,comp.arch From: lisper@sics.se (Bj|rn Lisper) Subject: Research Assistant position at KTH, Stockholm Sender: news@sics.se Organization: Swedish Institute of Computer Science, Kista Date: Wed, 24 Nov 1993 12:30:58 GMT Apparently-To: comp-parallel@sunet.se Research assistant position in Parallel Computer Systems at the department of Teleinformatics, Royal Institute of Technology, Stockholm (KTH). A PhD degree not older than 5 years is required. The position is limited to four years. Deadline for applications: Dec. 9, 1993. For more information contact: Prof. Lars-Erik Thorelli Email: le@it.kth.se Teleinformatics/Computer Systems Phone: +46 8 7521351 KTH-Electrum/204 Fax: +46 8 7511793 S-164 40 Kista Sweden Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: A bit of humor for your holiday enjoyment.... Subj: WHY ASK WHY Why do you need a driver's license to buy liquor when you can't drink and drive? Why isn't phonetic spelled the way it sounds? Why are there interstate highways in Hawaii? Why are there flotation devices under plane seats instead of parachutes? Why are cigarettes sold in gas stations when smoking is prohibited there? Do you need a silencer if you are going to shoot a mime? Have you ever imagined a world with no hypothetical situations? How does the guy who drives the snowplow get to work in the mornings? If 7-11 is open 24 hours a day, 365 days a year, why are there locks on the doors? If a cow laughed, would milk come out her nose? If nothing ever sticks to TEFLON, how do they make TEFLON stick to the pan? If you tied buttered toast to the back of a cat and dropped it from a height, what would happen? If you're in a vehicle going the speed of light, what happens when you turn on the headlights? You know how most packages say "Open here". What is the protocol if the package says, "Open somewhere else"? 
Why do they put Braille dots on the keypad of the drive-up ATM? Why do we drive on parkways and park on driveways? Why is brassiere singular and panties plural? Why is it that when you transport something by car, it's called a shipment, but when you transport something by ship, it's called cargo? You know that little indestructible black box that is used on planes, why can't they make the whole plane out of the same substance? Why is it that when you're driving and looking for an address, you turn down the volume on the radio? Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.arch From: ahusain@wotangate.sc.ti.com (Adil Husain) Subject: What is a good multicomputer disk block request distribution? Sender: usenet@csc.ti.com Nntp-Posting-Host: 192.91.134.114 Reply-To: ahusain@wotangate.sc.ti.com Organization: Texas Instruments Inc. Date: Wed, 24 Nov 1993 14:48:01 GMT Apparently-To: comp-parallel@uunet.uu.net Hi, I'm in the process of adding compute-node *file* caching support to a multicomputer simulator. I have all the data structures in place to handle actual I/O traces, but I don't have actual I/O traces, so I need to somehow generate a meaningful stream of I/O requests using random methods. I do have what I think is a fairly good model of multicomputer I/O request types and sizes. To satisfy the file caching mechanism, though, in which I'm keeping actual cache state, I need to come up with a good disk block request distribution. My references say use uniform random. What is the consensus? Thanks for any help, Adil --- ----------------------------------------------------------------------------- Adil Husain \ ahusain@wotangate.sc.ti.com \ adilh@pine.ece.utexas.edu TI Prism Group \ W:7132744061 & H:7135302576 \ I do not speak for TI. ----------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Mark Sawyer Subject: Re: Bulk Synchronous Model? Sender: UseNet News Admin Organization: Edinburgh Parallel Computing Centre References: <1993Nov24.134657.5258@hubcap.clemson.edu> Date: Wed, 24 Nov 1993 15:21:32 GMT Apparently-To: comp-parallel@uknet.ac.uk In article <1993Nov24.134657.5258@hubcap.clemson.edu>, isclyhb@leonis.nus.sg (Benjamin Lian) writes: > I am looking for papers on Valiant's Bulk Synchronous Model > for parallel computing. Would greatly appreciate pointers in > the right direction to start a literature search. Try: Valiant, L.G. A Bridging Model for Parallel Computation, Communications of the ACM, August 1990, Vol. 33 No.8. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ss7540@csc.albany.edu (SHUKLA SANDEEP) Subject: Re: concurrent langauges for teaching Sender: news@csc.albany.edu (News Administrator) Organization: State University of New York at Albany References: <1993Nov24.133229.2463@hubcap.clemson.edu> Date: 24 Nov 93 12:42:40 Apparently-To: comp-parallel@cis.ohio-state.edu In article <1993Nov24.133229.2463@hubcap.clemson.edu> davec@ecst.csuchico.edu (Dave Childs) writes: >I am trying to track down a good concurrent language for teaching concurrent >concepts. Something based on C, C++, or pascal would work nicely, but we >are open to other possibilities. 
You might like to try Distributed C freely available from the Universitat Muenchen and also another good system is PCN( It has more documentations and books) freely available from the Argonne National laboratory and Caltech. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mapdjb@midge.bath.ac.uk (D J Batey) Subject: Re: Bulk Synchronous Model? Message-ID: Organization: University of Bath, UK References: <1993Nov24.134657.5258@hubcap.clemson.edu> Date: Wed, 24 Nov 1993 18:09:54 GMT Someone asked for some pointers to Valiants Bulk Synchronous Model; this is the only reference I have on that subject: L.G.Valiant, "Bulk Synchronous Parallel Computers", Technical Report TR-08-89, Computer Science, Harvard University Duncan Batey, University of Bath, UK, djb@uk.ac.bath.maths Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Wed, 24 Nov 93 11:31:38 PST From: David Levine Subject: Re: A bit of humor for your holiday enjoyment.... In-Reply-To: <1993Nov24.134922.5930@hubcap.clemson.edu> Organization: Supercomputer Systems Division (SSD), Intel Cc: > How does the guy who drives the snowplow > get to work in the mornings? A VW Beetle, of course! ;-) - David D. Levine, Intel Supercomputer Systems Division == davidl@ssd.intel.com "Fahrvergnugen." Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ostr@acs2.bu.edu (boris ostrovsky) Subject: Re: Kendall Square Research address Date: 24 Nov 1993 19:38:26 GMT Organization: Boston University, Boston, MA, USA References: <1993Nov24.134708.5347@hubcap.clemson.edu> Nntp-Posting-Host: acs2.bu.edu Originator: ostr@acs2.bu.edu In article <1993Nov24.134708.5347@hubcap.clemson.edu>, louk@teleride.on.ca (Lou Kates) writes: |> Does anyone have contact info for Kendall Square Research? KSR's address is: Kendall Square Research 170 Tracer Lane Waltham, MA 02154-1379 tel (617)-895-9400 fax:(617)-890-7506 Boris Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: icw@ecs.soton.ac.uk (I C Wolton) Subject: RAPS benchmarking workshop Date: 24 Nov 93 16:12:55 GMT Organization: Electronics and Computer Science, University of Southampton Nntp-Posting-Host: felinfoel.ecs.soton.ac.uk RAPS Open Workshop on Parallel Benchmarks and Programming Models Chilworth Manor Conference Centre, Southampton, UK 7-8 Dec 1993 Workshop Overview ----------------- This workshop will review recent developments in programing models for parallel applications, outline the features of some of the RAPS parallel benchmarks and present some complementary international initiatives. The final session will give the vendors an opportunity to present the results of the RAPS benchmarks on their latest machines. The RAPS Consortium ------------------- The RAPS consortium was put together to promote the creation of benchmarks for important production applications on massively-parallel computers (RAPS stands for Real Applications on Parallel Systems). As part of this activity it has a strong interest in adopting a programming model that can provide portability without excessive sacrifice of performance. The consortium consists of a number of users and developers of significant large production codes running on supercomputers. It is supported by a Consultative Forum of computer manufacturers which currently includes ACRI, Convex, Cray, Fujitsu, IBM, Intel and Meiko. 
Codes being worked on for the RAPS benchmark suite include:
PAM-CRASH - a finite element code mainly used for car crash simulations
IFS/ARPEGE - a global atmospheric simulation code used for meteorology and climatology
FIRE - a fluid flow code used for automotive flow simulations
GEANT - used by CERN to simulate the interaction of high-energy particle showers with detectors

Provisional Programme
---------------------
The workshop will be held over two days, starting after lunch on Tuesday 7th December and finishing at lunchtime on Wednesday 8th December. Lunch will be available on both days.

Tuesday 7 Dec, Afternoon
Current status of RAPS - Karl Solchenbach, PALLAS
ESPRIT Application porting activities - Adrian Colebrook, Smith, Guildford
Impact of Cache on Data Organisation - Richard Reuter, IBM Heidelberg
High Performance Fortran Compiler Techniques and their Evaluation on some RAPS Benchmark Codes - Thomas Brandes, GMD
MPI - A Standard Message Passing Interface - Ian Glendinning, University of Southampton
Workshop Dinner

Wed 8 Dec, Morning
The PEPS Benchmarking Methodology - Ed Brocklehurst, National Physical Laboratory
The PARKBENCH Initiative - Tony Hey, University of Southampton
The IFS spectral model: the 3D version with some preliminary results - David Dent, ECMWF
Vendors' Presentation of Results for the RAPS Benchmarks

Registration Details
--------------------
The registration fee is 120 pounds sterling, including lunch and refreshments. An optional workshop dinner is being arranged at 25 pounds per head. Accommodation is available at Chilworth Manor for 54.50 pounds per night. Cheques for registration should be made payable to "University of Southampton". Payment for accommodation and the dinner should be made direct to Chilworth Manor on the day. Bookings and enquiries to: Chris Collier, Electronics & Computer Science, Highfield, University of Southampton, Southampton S09 5NH. Tel: +44 703 592069 Fax: +44 703 593045 Email: cdc@ecs.soton.ac.uk

This form should be returned to the conference organiser, Chris Collier.
Name .......................................................
Organisation ...............................................
Address ....................................................
....................................................
Telephone ..................................................
Email ......................................................
Special Dietary Requirements ................................
.............................................................
Registration (Price in Pounds Sterling) : 120.00
I would like accommodation for the nights of .............................................................
I would like to attend the workshop dinner .................. Yes/No

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hains@iro.umontreal.ca (Gaetan Hains) Subject: Rencontres francophones du parallelisme Sender: news@iro.umontreal.ca Organization: Universite de Montreal, Canada Date: Wed, 24 Nov 1993 19:58:17 GMT Apparently-To: uunet!comp-parallel

RenPar'6 - 6th French-Speaking Meeting on Parallelism (Rencontres francophones du parallelisme)
ENS Lyon, France, 6-10 June 1994
First announcement and call for papers

Following the meetings on parallelism organized in Saint-Malo, Toulouse, Marseille and Lille in March 1992 and in Brest in May 1993, RenPar'6 is organized this year by the Laboratoire de l'informatique du parallelisme (LIP) of ENS Lyon.
As with the previous meetings, the goal is to bring together researchers interested in the various aspects of parallelism and to encourage exchanges, with a special invitation to young researchers. Building on the success of the previous editions, this year we wish to open up widely to the French-speaking world: francophone sub-Saharan Africa, North Africa, Belgium, Quebec, Switzerland... The meetings will be held in the Grand Amphitheatre of ENS Lyon on 8, 9 and 10 June. They will be preceded by two days of tutorials, demonstrations and training sessions on topics of interest to our community.

Topics: Programming environments; Languages and compilation; General-purpose and special-purpose architectures; Scheduling and mapping; Models and complexity; Networks and communications; Distributed algorithms and systems; Performance evaluation; Implementation of parallelism.

Programme committee: Luc Bouge, chair (LIP, ENS Lyon, France); Michel Cosnard, vice-chair (LIP, ENS Lyon, France); Pierre Fraigniaud, vice-chair (LIP, ENS Lyon, France); El Mostafa Daoudi (U. Mohamed 1, Oujda, Morocco); Jean-Luc Dekeyser (LIFL, Lille, France); Michel Diaz (LAAS, Toulouse, France); Daniel Etiemble (LRI, Orsay, France); Jean-Marie Filloque (LIBr, Brest, France); Claude Girault (MASI-IBP, Paris, France); Claude Jard (IRISA, Rennes, France); Gaetan Hains (DIRO, U. Montreal, Quebec); Gaetan Libert (Fac. Polytechnique, Mons, Belgium); Zaher Mahjoub (U. Tunis 2, Tunisia); Dominique Mery (CRIN, Nancy, France); Elie Milgrom (UCL, Louvain-la-Neuve, Belgium); Philippe Mussi (INRIA, Sophia-Antipolis, France); Guy-Rene Perrin (LIB, Besancon, France); Patrice Quinton (IRISA, Rennes, France); Jean Roman (LaBRI, Bordeaux, France); Andre Schiper (EPFL, Lausanne, Switzerland); Maurice Tchuente (U. Yaounde, Cameroon); Denis Trystram (IMAG, Grenoble, France).

Calendar and contact:
Contact the secretariat: right away, if necessary!
Submissions due: 15 March 1994
Notification of acceptance: 25 April 1994
Revised version due: 10 May 1994
RenPar'6 conference at ENS Lyon: 6-10 June 1994
Selection of papers for TSI: 15 June 1994
Extended versions expected: 1 September 1994
Valerie Roger, RenPar'6 Secretary, LIP, ENS Lyon, 46 allee d'Italie, F-69364 Lyon cedex 07, France. Telephone: (+33) 72.72.80.37, Fax: (+33) 72.72.80.80, Internet: renpar6@lip.ens-lyon.fr

Submission of papers: Those wishing to present a paper are invited to submit a 4-page A4 text before 15 March 1994. Notification will be sent to authors by 25 April. The revised version (of at most 4 pages) must be delivered by 10 May for inclusion in the proceedings. A copy of the proceedings will be distributed on site to each participant.

Selection for a special issue of TSI: During the meetings, the programme committee will select the best papers whose authors are (exclusively) young researchers (students finishing their thesis, or holders of a doctorate obtained less than 3 years ago). The criteria will be the scientific content, the quality of the written text and the quality of the presentation. The authors will be invited to submit an extended version of their paper for a special issue of the French-language journal RAIRO Technique et Science Informatiques, published by Hermes/AFCET. The selection will be confirmed on 15 June, and the extended articles will be due on 1 September for publication in early 1995.
Demonstrations, training sessions and working meetings: Individuals or groups wishing to propose a demonstration or training activity on a research product are invited to submit a proposal before 15 March. Please get in touch for any information about the technical facilities offered by ENS Lyon: access to massively parallel machines, workstation rooms, equipped lecture rooms, etc.

General information: The registration fees will be kept as low as possible for students and members of universities and public research institutes (EPST), to allow the widest possible participation. Lunches will be included. Accommodation will be at the participants' expense; information on the various options will be sent out at registration time.

-- Gaetan Hains, Departement d'informatique et de recherche operationnelle, Universite de Montreal, C.P. 6128 succursale A, Montreal, Quebec H3C 3J7. Tel. +1 514 343-5747 | Fax +1 514 343-5834 | hains@iro.umontreal.ca

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mbl900@anusf.anu.edu.au (Mathew BM LIM) Subject: Re: Supercomputer Sale to China Date: 25 Nov 1993 01:56:54 GMT Organization: Australian National University References: <1993Nov22.135728.4241@hubcap.clemson.edu> <1993Nov24.132514.29916@hubcap.clemson.edu> Nntp-Posting-Host: 150.203.5.2

In <1993Nov24.132514.29916@hubcap.clemson.edu> coker@cherrypit.princeton.edu (David A. Coker) writes:
>I believe the computer being sold to China is a Cray.

The article below is from HPCwire:

Clinton Approves Cray Research Supercomputer Deal with China Nov 18 NEWS BRIEFS HPCwire
=============================================================================
Seattle, Wash. -- The United States has agreed to sell a sophisticated $10-million Cray Research YMP-2 supercomputer to China in an apparent goodwill gesture on the eve of a summit between the countries' presidents, The New York Times and Reuters reported last week. The deal was disclosed before last Friday's meeting between President Clinton and Chinese President Jiang Zemin, the highest-level contact between the two countries since the 1989 Tiananmen Square crackdown, Reuters said. The move was part of a White House strategy to embrace rather than isolate China despite disagreements over human rights, weapons proliferation, and trade, the New York Times reported, citing senior administration officials.

Clinton, who had accused the Bush administration of "coddling" China, said before the meeting that he was not softening the U.S. position toward Beijing. "Our policy is to try to engage China but to be very firm with the human rights issues, to be very firm on the weapons proliferation issues," he said. "But there are 1.2 billion people in China and we don't believe we can achieve our objectives within the context of complete isolation."

Even more significant for American business, the administration has also decided to lift the ban on important components for China's nuclear power plants like generators, the newspaper said, citing Commerce Department officials. The opening of China's nuclear market to American companies is expected to bring billions of dollars of sales to General Electric Co., a huge manufacturer of nuclear plants and equipment, the newspaper said.
The Cray Research system involved in the sale was described as "a relatively small mainline system, with two processors and a peak performance of 958 megaFLOPS," by Steve Conway, a spokesman for Cray. Although Conway reported that the company had not yet received direct confirmation of the sale's approval from the White House, U.S. officials told Reuters that the deal was virtually complete. The decision to approve the sale stemmed from a request made by the Chinese Meteorological Administration for equipment "for the purpose of monitoring and predicting weather patterns," one official told Reuters.

The sale was allowed despite what U.S. officials have described as clear evidence that China has exported M-11 missile components to Pakistan in violation of an international missile control accord. "We agreed to put in place a very tight monitoring regime to ensure that it is only used for that purpose, and having done that we approved the sale," the official said. "This sale has nothing to do with sanctions. The sanctions we have in place do not focus on this sort of thing." The official said the sale was in the United States' interest because the international weather monitoring system uses Cray supercomputers like the one involved in the deal with China. "This gives us needed coverage in that area," he said.

In informing Chinese Foreign Minister Qian Qichen Wednesday, U.S. Secretary of State Warren Christopher did not ask for any concessions from Peking, the New York Times said. Another official said the Chinese had already been informed of the administration's approval of the sale but that it would be formally announced in connection with the Clinton-Jiang meeting at the Asia-Pacific Economic Cooperation forum, Reuters reported. The United States' ban on military sales to China, slapped on following the army's crackdown on pro-democracy demonstrators in Tiananmen Square, still remains in place.

In Washington, D.C., acting State Department spokeswoman Christine Shelly said two more hurdles must be cleared before the Cray YMP-2 could actually be delivered to China, according to UPI. These hurdles are expected to be only a formality. The Supercomputer Control Regime -- made up of Japan and the United States, the only two nations that manufacture the massive data processors -- must endorse the sale, she said. She said the transaction must also be approved by the Coordinating Committee for Multilateral Export Controls, a panel that was set up during the Cold War by 17 western nations to prevent the sale to the East Bloc of technology that can be used for military purposes. The Commerce Department, which issues licenses for such sales, has already approved it, she said.

Copyright 1993 HPCwire. To receive the weekly HPC Select News Bulletin at no charge, send e-mail to "trial@hpcwire.ans.net".
-- Mathew Lim, Unix Systems Programmer, ANU Supercomputer Facility, Australian National University, Canberra, ACT, Australia 0200. Telephone : +61 6 249 2750 | Fax : +61 6 247 3425 | E-Mail : M.Lim@anu.edu.au Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: philippe@ens.ens-lyon.fr (Jean-Laurent Philippe) Subject: Seeking info about LSF Date: 26 Nov 1993 08:33:52 GMT Organization: Ecole Normale Superieure de Lyon, France Hi, netters, Could anyone provide information about LSF ? How does it compare to PVM ? I will summarize to the net if I get enough information. Jean-Laurent PHILIPPE LHPC Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: farhat@argos.enst.fr (Jocelyne Farhat) Subject: Requesting performance of thread systems Date: 26 Nov 1993 10:20:12 GMT Organization: Telecom Paris, France Sender: farhat@argos.enst.fr (Jocelyne Farhat) Nntp-Posting-Host: argos.enst.fr Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit I am looking for performance measurements of different thread systems (for example those of the systems Mach, Chorus and Solaris, the Mach C Threads and SUN LWP libraries, and the POSIX standard,etc...) to complete my bibliographical work on the subject. Does anyone know or have any that have been already done? Merci beaucoup. Jocelyne. P.S. Please send replies to farhat@inf.enst.fr Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: vgupta@research.nj.nec.com (Vipul Gupta) Subject: Communication graphs in 'real' applications Originator: vgupta@iris48 Sender: vgupta@research.nj.nec.com (Vipul Gupta) Organization: NEC Research Institute Date: Sat, 27 Nov 93 00:43:06 GMT Apparently-To: uunet!comp-parallel I am currently investigating the potential benefits of using reconfigurable multi-computer networks. I am using the task interaction graph model for parallel applications -- vertices in the graph represent parallel processes and edges represent the need for interprocess communication. I am looking for examples of such graphs for some 'real-life' applications. Unstructured graphs e.g. those arising in finite element applications are of particular interest. I would really appreciate it if somebody can provide me with access to such graphs. If you think you have something that might serve my purpose, but are not sure, I would still appreciate hearing from you. I can be contacted via e-mail at vgupta@research.nj.nec.com or vgupta@yoko.rutgers.edu. Thank you for your time, Vipul Gupta Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tony@aurora.cs.msstate.edu (Tony Skjellum) Subject: MPI Library Paper Available Date: 27 Nov 93 09:49:53 GMT Organization: Mississippi State University Nntp-Posting-Host: aurora.cs.msstate.edu Summary: A paper on writing libraries with MPI is available now by ftp Keywords: MPI, Parallel Libraries Following up on my promise at the SC'93 MPI minisymposium, the paper "Writing Libraries in MPI" is now available on anonymous ftp. site: aurora.cs.msstate.edu directory: pub/reports file: mpi_lib_27nov93.ps.Z This will appear in the Proceedings of the Scalable Libraries Conference, to be published by IEEE Press. -Tony Skjellum -- . . . . . . . . . "There is no lifeguard at the gene pool." - C. H. 
Baldwin - - - Anthony Skjellum, MSU/ERC, (601)325-8435; FAX: 325-8997; tony@cs.msstate.edu

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: vip@kvant.munic.msk.su (Andrei I. Massalovitch) Subject: COCOM is dead !! We need NN Boards, Supers, Info etc. Date: Sun, 28 Nov 1993 11:00:55 GMT X-Mailer: BML [MS/DOS Beauty Mail v.1.25] Reply-To: vip@kvant.munic.msk.su Organization: NII Kvant Keywords: NN boards Sender: news-server@kremvax.demos.su Summary: COCOM is dead !! We need NN Boards, Supers, etc. Message-ID:

Dear Sir/Madam, I'm Andrei Massalovitch, vice-president of the Parallel Systems Division of the Scientific and Research Institute "Kvant". Our division is a young, dynamic scientific team. For the last few years we have been doing intensive research work in the field of parallel supercomputing, transputer systems and neural networks. We hope that some of our results will be of both scientific and commercial interest. We now urgently need any information about special- and general-purpose Neural Network PC boards and are ready to buy some of them immediately.

We would also like to take this opportunity to introduce ourselves. The Parallel Systems Division is a team of more than 200 highly experienced hardware and software engineers. The business and operations of our Division fall into five main directions.
1. Using transputer-based computers and neural networks to solve problems of image & signal processing.
2. Development of MIMD and SIMD parallel VLSI-based supercomputing systems with 10-100 Gflops performance, and intelligent workstations with 1 GIPS performance for signal/image processing and various modeling problems. This work includes the application of high-performance microprocessors (e.g. i860) and transputers.
3. Development of large distributed computing/information processing complexes. Such complexes can consist of general-purpose computers, problem-oriented systems and communication hardware. IBM PC-compatible PCs and workstations are used as terminals. Integration of computing resources is realized with common network hardware.
4. Integrated solutions of data protection problems in computer systems and networks.
5. Development of new models of a neuron and a neural network on the basis of the latest results obtained in Russian and world neurobiology (this is the first attempt at creating a BioNeurocomputer in our country).

Unfortunately, for a number of years most of this work was conducted in "special" laboratories and we could not publish the results obtained. Now the situation has changed, and most of the former bans have been removed. We are very keen to break out of the "information blockade" and to engage in all types of cooperation with representatives of the scientific and industrial circles of your country. Thanks in advance. Yours faithfully, Andrei Massalovitch /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ < > Vice-President > Address : Dr.
A.Massalovitch < Parallel Systems Division < P.O.Box 430 > S&R Institute "Kvant" > 121019 Moscow < Moscow, Russia < Russia > \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ ____________________________________________________________________ E-mail : vip@kvant.munic.msk.su Fax : (095) 153-9584 ____________________________________________________________________ -- Andrei Massalovitch Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ukeller@pallas-gmbh.de (Udo Keller) Subject: European MPI Workshop - First Announcement Organization: PALLAS GmbH First Announcement E U R O P E A N M P I W O R K S H O P MPI, the new standard for message-passing programming has been presented at the Supercomputer'93 in Portland recently. The MPI (Message-Passing Interface) standard has been defined by the transatlantic MPI Committee. The European particpation in the MPI Committee was funded by the ESPRIT project PPPE. The MPI Committee is now waiting for public comments until February 11. European software developers are using message-passing systems for many years. With their long experience in programming message-passing parallel computers, European software developers should be actively involved in the final round of the MPI definition. It is the aim of the European MPI Workshop to organize the dissemination of MPI in Europe and to collect the European developers' view on MPI. Date: January 17/18 1994 Location: INRIA Sophia Antipolis (near Nice) Organized by: PALLAS, GMD, INRIA (on behalf of the PPPE project) Registration fee: 70 ECU or 450 FF (to be paid cash at registration Who should attend: European software developers with experience in parallel computing, preferably message passing. Participants from universities, research organizations, and industry are welcome. The maximum number of participants is 80. Agenda: January 17 afternoon: Presentation of the MPI message passing standard January 18 morning: Feedback of European software developers on MPI After the workshop the MPI Committee will have its European meeting. If you want to participate or need more information, please contact PALLAS mpi-ws@pallas-gmbh.de You will receive the MPI standard document. Details on speakers, transport, hotels etc. will be sent out later. -- ---------------------------------------------------------------------------- Udo Keller phone : +49-2232-1896-0 PALLAS GmbH fax : +49-2232-1896-29 Hermuelheimer Str.10 direct line: +49-2232-1896-15 D-50321 Bruehl email : ukeller@pallas-gmbh.de ---------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: ukeller@pallas-gmbh.de (Udo Keller) Newsgroups: comp.sys.super,comp.parallel Subject: CEC Evaluation on Technical Applications available Organization: PALLAS GmbH Technical Applications for High Performance Computers New Evaluation Report from PALLAS GmbH, Germany The lack of commercially available application software still is the main obstacle for the wide use of parallel computers. In particular, 3rd party software packages used by engineers in research and industry have not been ported to (massively) parallel computer systems. On behalf of the Commission of the European Communities, PALLAS has evaluated the needs of the European industrial users concerning the introduction of MPP systems and application software. A listing of the most important 3rd party application codes for technical simulation has been generated. 
The main criterion for the ranking was the need of the users to get these codes ported to parallel systems. This evaluation study is available from PALLAS GmbH, Hermuelheimer Str. 10, D-50321 Bruehl, Germany, at a price of DM 150 not including VAT (phone: +49-2232-1896-0, email: info@pallas-gmbh.de). PALLAS has many years of experience in high-performance development and implementation of software for workstation clusters, parallel systems and supercomputers. PALLAS develops and markets the PARMACS interface and analysis tools for parallel programming.
-- ---------------------------------------------------------------------------- Udo Keller phone : +49-2232-1896-0 PALLAS GmbH fax : +49-2232-1896-29 Hermuelheimer Str.10 direct line: +49-2232-1896-15 D-50321 Bruehl email : ukeller@pallas-gmbh.de ----------------------------------------------------------------------------

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ihzec@westminster.ac.uk (ihzec) Subject: Available, FREE graduate for 14 week project in US. Organization: University of Westminster X-Newsreader: TIN [version 1.2 PL2] Date: Mon, 29 Nov 1993 16:45:33 GMT Apparently-To: comp-parallel@uk.ac.uknet

Hi there! I am currently studying for a Masters Degree in Parallel and Distributed Systems. The final phase requires that I do a 14-week industrial placement in a related field, producing an original piece of work. I would like a placement in the US if possible. If you would like more details about my interests, course or abilities, or would like to discuss a project, please e-mail me. If you are unable to help presently, but know of someone who can, would you be so kind as to pass this message on? Cheers Phil | Phil LoCascio + _____ ____ | | Centre of Parallel + /____/ /____/ / / | | Distributed Computing + / / / __/__ /_____ | | email: ihzec@uk.ac.westminster + If life's a big party, |

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: npoch@uni-paderborn.de (Norbert Poch) Subject: __shared on the KSR1 ? Date: 29 Nov 1993 17:51:18 GMT Organization: Uni-GH Paderborn, Germany Nntp-Posting-Host: sunwing.uni-paderborn.de

-- Hi everyone, I'm currently working on a KSR1-32 and C / C++. There are some questions regarding the usage of the __shared directive, and I'd be very grateful if someone could help me. I've seen from the c-tutorial files that global variables that are to be accessed from different pthreads are declared as __shared. Take for example the following program. I define a global array and access its data elements in parallel from two pthreads.
The behaviour of the pthreads is exactly the same for each declaration of the array (__shared, __private and default). What's the difference between these types of declarations? How can I declare a variable in the global scope and place it in "local" memory? Thanks to all those KSR1 folks out there! Norbert Poch _______________________________________________________________________________ Norbert Poch npoch@dat.uni-paderborn.de Fachgebiet Datentechnik Uni-GH Paderborn (Germany) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>              /* for malloc() */

/* define three arrays a, b and c */
__shared int *a;                 /* one as __shared */
int *b;                          /* one normally declared */
__private int *c;                /* and the other one as __private */

pthread_t a1,a2,b1,b2,c1,c2;     /* and define two pthreads to handle each array */
pthread_mutex_t out;             /* mutex protecting screen output */

void init_data(int *d, int val)
{
    /* initialize array elements */
    int i;
    for (i = 0; i < 10; i++)
        d[i] = val;
}

void *data_out(void *arg)
{
    /* output array elements, serialized by the output mutex */
    int *d = (int *) arg;
    int i;
    pthread_mutex_lock(&out);
    for (i = 0; i < 10; i++)
        printf("%d ", d[i]);
    printf("\n");
    pthread_mutex_unlock(&out);
    return NULL;
}

int main()
{
    void *return_ptr;

    /* create a mutex variable for printing on screen */
    pthread_mutex_init(&out, pthread_mutexattr_default);

    /* allocate memory for the three arrays */
    a = (int *) malloc(10 * sizeof(int));
    b = (int *) malloc(10 * sizeof(int));
    c = (int *) malloc(10 * sizeof(int));

    /* set all values of the arrays to a predefined value */
    init_data(a, 128);
    init_data(b, 255);
    init_data(c, 64);

    /* check if values have been set correctly */
    data_out(a);
    data_out(b);
    data_out(c);

    /* create two pthreads to output data of array a */
    pthread_create(&a1, pthread_attr_default, data_out, (void *) a);
    pthread_create(&a2, pthread_attr_default, data_out, (void *) a);
    /* create two pthreads to output data of array b */
    pthread_create(&b1, pthread_attr_default, data_out, (void *) b);
    pthread_create(&b2, pthread_attr_default, data_out, (void *) b);
    /* create two pthreads to output data of array c */
    pthread_create(&c1, pthread_attr_default, data_out, (void *) c);
    pthread_create(&c2, pthread_attr_default, data_out, (void *) c);

    /* wait for all pthreads to terminate */
    pthread_join(a1, &return_ptr);
    pthread_join(a2, &return_ptr);
    pthread_join(b1, &return_ptr);
    pthread_join(b2, &return_ptr);
    pthread_join(c1, &return_ptr);
    pthread_join(c2, &return_ptr);

    return 0;
}

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 29 Nov 1993 12:47:51 EST From: Pulin Subject: fifth neural network conference proceedings... The Proceedings of the Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University at Fort Wayne, held April 9-11, 1992 are now available. They can be ordered ($9 + $1 U.S. mail cost; make checks payable to IPFW) from: Secretary, Department of Physics FAX: (219)481-6880 Voice: (219)481-6306 OR 481-6157 Indiana University Purdue University Fort Wayne email: proceedings@ipfwcvax.bitnet Fort Wayne, IN 46805-1499 The following papers are included in the Proceedings of the Fifth Conference: Tutorials Phil Best, Miami University, Processing of Spatial Information in the Brain William Frederick, Indiana-Purdue University, Introduction to Fuzzy Logic Helmut Heller and K. Schulten, University of Illinois, Parallel Distributed Computing for Molecular Dynamics: Simulation of Large Heterogeneous Systems on a Systolic Ring of Transputer Krzysztof J. Cios, University Of Toledo, An Algorithm Which Self-Generates Neural Network Architecture - Summary of Tutorial Biological and Cooperative Phenomena Optimization Ljubomir T. Citkusev & Ljubomir J. Buturovic, Boston University, Non-Derivative Network for Early Vision M.B. Khatri & P.G.
Madhavan, Indiana-Purdue University, Indianapolis, ANN Simulation of the Place Cell Phenomenon Using Cue Size Ratio J. Wu, M. Penna, P.G. Madhavan, & L. Zheng, Purdue University at Indianapolis, Cognitive Map Building and Navigation J. Wu, C. Zhu, Michael A. Penna & S. Ochs, Purdue University at Indianapolis, Using the NADEL to Solve the Correspondence Problem Arun Jagota, SUNY-Buffalo, On the Computational Complexity of Analyzing a Hopfield-Clique Network Network Analysis M.R. Banan & K.D. Hjelmstad, University of Illinois at Urbana-Champaign, A Supervised Training Environment Based on Local Adaptation, Fuzzyness, and Simulation Pranab K. Das II & W.C. Schieve, University of Texas at Austin, Memory in Small Hopfield Neural Networks: Fixed Points, Limit Cycles and Chaos Arun Maskara & Andrew Noetzel, Polytechnic University, Forced Learning in Simple Recurrent Neural Networks Samir I. Sayegh, Indiana-Purdue University, Neural Networks Sequential vs Cumulative Update: An * Expansion D.A. Brown, P.L.N. Murthy, & L. Berke, The College of Wooster, Self- Adaptation in Backpropagation Networks Through Variable Decomposition and Output Set Decomposition Sandip Sen, University of Michigan, Noise Sensitivity in a Simple Classifier System Xin Wang, University of Southern California, Complex Dynamics of Discrete- Time Neural Networks Zhenni Wang and Christine di Massimo, University of Newcastle, A Procedure for Determining the Canonical Structure of Multilayer Feedforward Neural Networks Srikanth Radhakrishnan and C, Koutsougeras, Tulane University, Pattern Classification Using the Hybrid Coulomb Energy Network Applications K.D. Hooks, A. Malkani, & L. C. Rabelo, Ohio University, Application of Artificial Neural Networks in Quality Control Charts B.E. Stephens & P.G. Madhavan, Purdue University at Indianapolis, Simple Nonlinear Curve Fitting Using the Artificial Neural Network Nasser Ansari & Janusz A. Starzyk, Ohio University, Distance Field Approach to Handwritten Character Recognition Thomas L. Hemminger & Yoh-Han Pao, Case Western Reserve University, A Real-Time Neural-Net Computing Approach to the Detection and Classification of Underwater Acoustic Transients Seibert L. Murphy & Samir I. Sayegh, Indiana-Purdue University, Analysis of the Classification Performance of a Back Propagation Neural Network Designed for Acoustic Screening S. Keyvan, L. C. Rabelo, & A. Malkani, Ohio University, Nuclear Diagnostic Monitoring System Using Adaptive Resonance Theory Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sims@gehrig.ucr.edu (david sims) Subject: seeking ARGUS-related research Date: 29 Nov 1993 19:44:16 GMT Organization: University of California, Riverside (College of Engineering/Computer Science) I'm trying to track down references to research related to ARGUS. I have Liskov and Scheifler's 1983 TOPLAS paper, "Guardians and Actions: Linguistic Support for Robust, Distributed Programs". Would somone send me references to ARGUS-related work performed since 1983? many thanks, dave sims -- David L. Sims Department of Computer Science sims@cs.ucr.edu University of California +1 (909) 787-6437 Riverside CA 92521-0304 PGP encryption key available on request. 
USA Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Path: aoi!nakasima From: nakasima@kuis.kyoto-u.ac.jp (Hiroshi Nakashima) Subject: Re: CFP: 8th ACM International Conference on Supercomputing (ICS'94) In-Reply-To: cdp@sp95.csrd.uiuc.edu's message of Mon, 29 Nov 1993 19:18:41 GMT Sender: news@kuis.kyoto-u.ac.jp Nntp-Posting-Host: aoi.kuis.kyoto-u.ac.jp Reply-To: nakasima@kuis.kyoto-u.ac.jp Organization: Dept. of Info. Sci., Kyoto Univ., JAPAN References: <1993Nov29.191841.23054@sparky.sterling.com> Date: Tue, 30 Nov 1993 01:25:38 GMT Apparently-To: comp-parallel@uunet.uu.net In article <1993Nov29.191841.23054@sparky.sterling.com> cdp@sp95.csrd.uiuc.edu (Constantine Polychronopoulos) writes: > 8th ACM International Conference on Supercomputing > ================================================== > > Manchester, England, July 11-15, 1994 > ===================================== > : > : > > Architecture -- Professor Shinji Tomita, Department of Information > Science, Kyoto University, Yoshida-hon-machi, Kyoto, Japan, T606. > tomita@kuis.kyoto-u.ac.jp. +81-75-753-5393. We would like to ask you authors to send your papers on architectural topics NOT TO THE ADDRESS ABOVE, BUT TO: ics94arc@kuis.kyoto-u.ac.jp so that we can pick up submission mails easily. We also ask you to attach information for contact to PS manuscript, preferably in the enclosed format. Best Regards, Prof. Shinji Tomita Prof. Hiroshi Nakashima Vice Program Chair (Architecture) Dept. of Information Science Dept. of Information Science Kyoto University Kyoto University --- % Fill the following and attach it to PS manuscript, and send them % together to ics94arc@kuis.kyoto-u.ac.jp % \title{} % title of the paper \authors{}{} % author's name and affiliation % % if two or more authors, duplicate this entry like; % % \authors{1st author's name}{1st author's affi.} % % \authors{2nd author's name}{1st author's affi.} % % : % % \authors{n-th author's name}{n-th author's affi.} \name{} % name of the person for further contact \zip{} % zip code and/or country name \address{} % surface mail address \organization{} % organization name \section{} % section name \tel{} % phone number \fax{} % facsimile number \email{} % e-mail address % % The following is an example % \title{The Architecture of a Massively Parallel Computer} % \authors{Shinji Tomita}{Kyoto Univ.} % \authors{Hiroshi Nakashima}{Kyoto Univ.} % \name{Shinji Tomita} % \zip{606-01, JAPAN} % \address{Yoshida Hon-Machi, Sakyou-Ku, Kyoto} % \organization{Kyoto University} % \section{Dept. of Information Science} % \tel{+81-75-753-5373} % \fax{+81-75-753-5379} % \email{tomita@kuis.kyoto-u.ac.jp} Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: german@cs.columbia.edu (German Goldszmidt) Subject: Re: concurrent langauges for teaching Date: 30 Nov 1993 03:47:18 GMT Organization: Computer Science Department Columbia University References: <1993Nov24.133229.2463@hubcap.clemson.edu> <1993Nov29.155309.18274@hubcap.clemson.edu> In article <1993Nov29.155309.18274@hubcap.clemson.edu>, ss7540@csc.albany.edu (SHUKLA SANDEEP) writes: > In article <1993Nov24.133229.2463@hubcap.clemson.edu> davec@ecst.csuchico.edu (Dave Childs) writes: > >I am trying to track down a good concurrent language for teaching concurrent > >concepts. Something based on C, C++, or pascal would work nicely, but we > >are open to other possibilities. 
You might also like to try Concert/C, which is C based and is freely available from IBM research, via anonymous ftp from software.watson.ibm.com. It was tried in a course here at Columbia, and also at NYU. -- |German S. Goldszmidt | |Computer Science Dept. tel: +1 212 939 7099 | |Columbia University fax: +1 212 666 0140 | |530 West 120th Street, #626 | Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: dwilmot@crl.com (Dick Wilmot) Newsgroups: comp.parallel,comp.arch Subject: Re: What is a good multicomputer disk block request distribution? Date: 29 Nov 1993 22:38:29 -0800 Organization: CRL Dialup Internet Access References: <1993Nov29.155254.18115@hubcap.clemson.edu> ahusain@wotangate.sc.ti.com (Adil Husain) writes: >Hi, >I'm in the process of adding compute-node *file* caching support to a >multicomputer simulator. I have all the data structures in place to >handle actual I/O traces, but I don't have actual I/O traces, so I >need to somehow generate a meaningful stream of I/O requests using >random methods. I do have what I think is a fairly good model of >multicomputer I/O request types and sizes. To satisfy the file >caching mechanism, though, in which I'm keeping actual cache state, I >need to come up with a good disk block request distribution. My >references say use uniform random. What is the consensus? >Thanks for any help, >Adil >--- >----------------------------------------------------------------------------- >Adil Husain \ ahusain@wotangate.sc.ti.com \ adilh@pine.ece.utexas.edu >TI Prism Group \ W:7132744061 & H:7135302576 \ I do not speak for TI. >----------------------------------------------------------------------------- I studied file access patterns in a serious way a few years back. Studied mainframes (IBM and compatibles) and PCs. Found that the average byte in a file system is hardly ever accessed. A few, small files account for a large portion of data accesses. The most active 1% of OPENED files for any given week accounted for 40 - 60% of all I/O activity. There were many files that weren't even opened in a week's time, so the activity was actually even more skewed than these numbers. The only extremely active files were quite small and often highly shared by many users, many processes and several computers (the MVS operating system allows disk sharing across computer systems). These files lived in data centers supporting large, active database systems. Large files were never highly active on a per byte basis. These results were reported in the proceedings of the Dec. 1989 Computer Measurement Group conference. There have also been a number of other studies. Ousterhout at Berkeley and someone at Rochester. A recent paper by BYU researchers in IEEE Trans. on Knowledge & Data Engineering also found the hottest 1% of disk space accounting for 50% of I/O activity. The same pattern seems to reappear on all different kinds of systems. This was actually predicted by a Harvard professor in the mid 1940s. His name is Zipf and they call it Zipf's law. He studied these patterns in many different kinds of systems. English language word usage comes to mind, where he studied Joyce's _Ulysses_ and found that the most common 28 words accounted for some large percentage of all words counted. I heard that a study at a supercomputer site found that 30% of the files were write-only (never actually read -- about as cold as you can get).
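As a rough illustration of the kind of skew described above, here is a minimal C sketch that generates a Zipf-distributed block request stream instead of a uniform one; the block count, Zipf exponent and request count are my own illustrative assumptions, not figures from the studies cited.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NBLOCKS 10000   /* number of distinct disk blocks (assumed)        */
#define SKEW    1.0     /* Zipf exponent; 1.0 gives the classic 1/rank law */
#define NREQS   100000  /* length of the generated request stream          */

static double cdf[NBLOCKS];          /* cumulative probability by rank */

static void build_cdf(void)
{
    double sum = 0.0, acc = 0.0;
    int i;
    for (i = 0; i < NBLOCKS; i++)
        sum += 1.0 / pow((double)(i + 1), SKEW);
    for (i = 0; i < NBLOCKS; i++) {
        acc += (1.0 / pow((double)(i + 1), SKEW)) / sum;
        cdf[i] = acc;
    }
}

static int sample_block(void)        /* smallest rank with cdf >= u */
{
    double u = (double)rand() / ((double)RAND_MAX + 1.0);
    int lo = 0, hi = NBLOCKS - 1;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (cdf[mid] < u) lo = mid + 1; else hi = mid;
    }
    return lo;                       /* rank 0 is the hottest block */
}

int main(void)
{
    int i;
    srand(1);                        /* fixed seed for a repeatable trace */
    build_cdf();
    for (i = 0; i < NREQS; i++)
        printf("%d\n", sample_block());
    return 0;
}
In a simulator one would normally also permute the rank-to-block-address mapping so that the hot blocks are not all physically adjacent on disk.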
If you want realistic I/O patterns then accesses will be highly skewed toward a few favorite locations and then spreading out over the rest of the file system. Negative exponential as I recall. -- Dick Wilmot Editor, Independent RAID Report (510) 938-7425 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Zahid.Hussain@brunel.ac.uk (Zahid Hussain) Subject: SIMD/MSIMD for 3D graphics Message-ID: Organization: Brunel University, Uxbridge, UK Date: Tue, 30 Nov 1993 09:03:22 GMT Does anyone have any reference as to how SIMD/MSIMD parallel processors are being used for 3D graphics? In which directions is the research being directed? Many thanks in anticipation. **Zahid. Dr Zahid Hussain Research Fellow, Dept of Electrical Engineering Brunel University E-mail: Zahid.Hussain@brunel.ac.uk Uxbridge, Middlesex UB8 3PH Tel: +44 (0)895 274000 x2227 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hsingh@seas.gwu.edu (harmel singh) Subject: Re: Supercomputer Sale to China Date: 30 Nov 1993 12:29:51 GMT Organization: George Washington University References: <1993Nov22.135728.4241@hubcap.clemson.edu> <1993Nov24.132514.29916@hubcap.clemson.edu> <1993Nov29.155422.18819@hubcap.clemson.edu> Mathew BM LIM (mbl900@anusf.anu.edu.au) wrote: => In <1993Nov24.132514.29916@hubcap.clemson.edu> coker@cherrypit.princeton.edu (David A. Coker) writes: => >I believe the computer being sold to China is a Cray. => The article below is from HPCWire : => Clinton Approves Cray Research Supercomputer Deal with China Nov 18 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ But wasn't Cray trying to sell a supercomputer to India a couple of years back, which of course fell through because the Indians could not convince the US that they would be using it for pure weather forecasting :) And yes, they (the Indians) went ahead to make their own Hypercube. There was an interesting article on this "bungling of multi million dollar deal" in the Washington Post this spring. Apparently the policymakers are improvising! Let's see ... Megabucks Vs. Human rights ... hard decision indeed :) hks. PS: I don't think this article belongs here... but hey, I am just stating the facts. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: grabner@gup.uni-linz.ac.at (Siegfried Grabner,GUP-Linz) Reply-To: grabner@gup.uni-linz.ac.at Organization: GUP Linz, University Linz, AUSTRIA Subject: CONPAR 94 / VAPP VI - second announcement Keywords: CONPAR 94 - VAPP VI Johannes Kepler University of Linz, Austria September 6-8, 1994 Second Announcement and Call For Papers The past decade has seen the emergence of two highly successful series of CONPAR and VAPP conferences on the subject of parallel processing. The Vector and Parallel Processors in Computational Science meetings were held in Chester (VAPP I, 1981), Oxford (VAPP II, 1984), and Liverpool (VAPP III, 1987). The International Conferences on Parallel Processing took place in Erlangen (CONPAR 81), Aachen (CONPAR 86) and Manchester (CONPAR 88). In 1990 the two series joined together and the CONPAR 90 - VAPP IV conference was organized in Zurich. CONPAR 92 - VAPP V took place in Lyon, France. The next event in the series, CONPAR 94 - VAPP VI, will be organized in 1994 at the University of Linz (Austria) from September 6 to 8, 1994. The format of the joint meeting will follow the pattern set by its predecessors.
It is intended to review hardware and architecture developments together with languages and software tools for supporting parallel processing and to highlight advances in models, algorithms and applications software on vector and parallel architectures. It is expected that the program will cover ========================================== * languages / software tools * automatic parallelization and mapping * hardware / architecture * performance analysis * algorithms * applications * models / semantics * paradigms for concurrency * testing and debugging * portability A special session will be organized on Parallel Symbolic Computation. The proceedings of the CONPAR 94 - VAPP VI conference are intended to be published in the Lecture Notes in Computer Science series by Springer Verlag. This conference is organized by GUP-Linz in cooperation with RISC-Linz, ACPC and IFSR. Support is provided by GI-PARS, OCG, OGI, IFIP WG10.3. Schedule ======== Submission of complete papers Feb 15 1994 Notification of acceptance May 1 1994 Final (camera-ready) version of accepted papers July 1 1994 Paper Submission ================ Contributors are invited to send five copies of a full paper not exceeding 15 double-spaced pages in English to the program committee chairman at: Research Institute for Symbolic Computation (RISC-Linz) Johannes Kepler University, A-4040 Linz, Austria Phone: +43 7236 3231 41, Fax: +43 7236 3231 30 Email: conpar94@risc.uni-linz.ac.at The title page should contain a 100 word abstract and five specific keywords. CONPAR/VAPP also accepts and explicitly encourages submission by electronic mail to conpar94@risc.uni-linz.ac.at. Submitted files must be either * in uuencoded (preferably compressed) DVI format or * in uuencoded (preferably compressed) PostScript format as created on most Unix systems by cat paper.ps | compress | uuencode paper.ps.Z > paper.uue Invited Speakers ================ Ian Foster (Argonne National Laboratory) Kai Hwang (Stanford University) Monica Lam (Stanford University) Organizing Committee ==================== Conference Chairman: Prof. Jens Volkert Honorary Chairman: Prof. Wolfgang Handler Program Chairman: Prof. Bruno Buchberger Members: Siegfried Grabner, Wolfgang Schreiner Conference Address: University of Linz, Dept. of Computer Graphics and Parallel Processing (GUP-Linz) Altenbergerstr. 69, A-4040 Linz, Austria Tel.: +43-732-2468-887 (884), Fax.: +43-732-2468-10 (c/o Siegfried Grabner) Email: conpar94@gup.uni-linz.ac.at Program Committee ================= Chairman: Bruno Buchberger (Austria) Makoto Amamiya (Japan), Francoise Andre (France), Marco Annaratone (USA), Pramod C.P. Bhatt (India), Dario Bini (Italy), Arndt Bode (Germany), Kiril Boyanov (Bulgaria), Helmar Burkhart (Switzerland), Michel Cosnard (France), Frank Dehne (Canada), Mike Delves (UK), Ed F. Deprettere (The Netherlands), Jack Dongarra (USA), Iain Duff (UK), Klaus Ecker (Germany), John P. ffitch (UK), Rolf Fiebrich (USA), Ian Foster (USA), Geoffrey Fox (USA), Christian Fraboul (France), Wolfgang Gentzsch (Germany), Thomas Gross (USA), Gaetan Hains (Canada), Guenter Haring (Austria), Hiroki Honda (Japan), Hoon Hong (Austria), Friedel Hossfeld (Germany), Roland N. Ibbett (UK), Chris Jesshope (UK), Harry Jordan (USA), Peter Kacsuk (Hungary), Erich Kaltofen (USA), Hironori Kasahara (Japan), Wolfgang Kleinert (Austria), Wolfgang Kuechlin (Germany), Otto Lange (Germany), Michael A. Langston (USA), Allen D.
Malony (USA), Alfonso Miola (Italy), Nikolay Mirenkov (Japan), Yoichi Muraoka (Japan), Philippe Navaux (Brasil), David A. Padua (USA), Cherri Pancake (USA), Dennis Parkinson (UK), Guy-Rene Perrin (France), Ron Perrott (UK), Bernard Philippe (France), Brigitte Plateau (France), Ramon Puigjaner (Spain), Michael J. Quinn (USA), Gerard L. Reijns (The Netherlands), Karl-Dieter Reinartz (Germany), Dirk Roose (Belgium), Wojciech Rytter (Poland), Stanislav G. Sedukhin (Japan), B. Sendov (Bulgaria), Othmar Steinhauser (Austria), Ondrej Sykora (Slovakia), Denis Trystram (France), Eugene Tyrtyshnikov (Russia), Mateo Valero (Spain), Marco Vanneschi (Italy), Paul Vitanyi (The Netherlands), Jens Volkert (Austria), Richard Wait (UK), Paul S. Wang (USA), Peter Zinterhof (Austria). Reply Form ========== We encourage you to reply via e-mail, giving us the information listed below. If you do not have the possibility to use e-mail, please copy the form below and send it to the conference address. ------------------------------- cut here --------------------------------------- CONPAR 94 - VAPP VI Reply Form Name: First Name: Title: Institution: Address: Telephone: Fax: E-Mail: Intentions (please check appropriate boxes) o I expect to attend the conference o I wish to present a paper o I wish to present at the exhibition (industrial / academic) ------------------------------------------------------------------------------ Siegfried GRABNER Tel: ++43-732-2468-884 (887) Dept. for Graphics and Parallel Processing Fax: ++43-732-2468-10 (GUP-Linz) Johannes Kepler University Email: Altenbergerstr.69, A-4040 Linz,Austria/Europe conpar94@gup.uni-linz.ac.at ------------------------------------------------------------------------------ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Lionel.Kervella@irisa.fr (Lionel Kervella) Subject: barrier synchronization and reduction on KSR1 Date: 30 Nov 1993 13:31:31 GMT Organization: IRISA, Campus de Beaulieu, 35042 Rennes Cedex, FRANCE I am looking for efficient implementations of synchronization barriers and reduction operations in C or assembler on the KSR1 machine. I am wondering whether there is an alternative to using the primitives pthread_barrier_checkout and pthread_barrier_checkin for doing efficient barrier synchronization. Moreover, I would like to know if the TCGMSG library provides good reductions. If someone has already used it, I will appreciate their comments. Thank you for your help. Lionel Kervella. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: trachos@uni-paderborn.de (Konstantin Trachos) Subject: References about Topic "hot spot" ? Date: 30 Nov 1993 14:18:50 GMT Organization: Uni-GH Paderborn, Germany Nntp-Posting-Host: sunflower.uni-paderborn.de Hi, I need some pointers to previous work done concerning the appearance or avoidance of hot spots in parallel programs. Any information will be welcome. Thanks for any help. I will summarize if there is interest.
-- Konstantin Trachos email: trachos@dat.uni-paderborn.de -------------------------------------------------------------------------------- <> Jules Lemantre (1853 - 1914) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: DSPWorld@world.std.com (Amnon Aliphas) Subject: "CALL FOR PAPERS - ICSPAT'94 - DSP WORLD EXPO" Organization: The World Public Access UNIX, Brookline, MA Date: Tue, 30 Nov 1993 16:38:09 GMT Apparently-To: uunet!comp-parallel CALL FOR PAPERS - ICSPAT '94 International Conference on Signal Processing Applications & Technology featuring DSP World Expo. October 18-21, 1994 Grand Kempinski Hotel - Dallas, Texas
Application Areas: Aerospace, Audio, Automotive, Communications, Consumer Products, DSP Machines, DSP Software, DSP Technology, Geophysics, Image Processing, Industrial Control, Instrumentation & Testing, Medical Electronics, Multimedia, Neural Networks, Parallel Processing, Processor Architectures, Radar, Radio SATCOM & NAV, Robotics, Speech Processing, Telephony, Underwater/Sonar, VLSI Architectures, Virtual Reality & Other Applications
ITF Product Reviewer: Mr. Nicolas Mokhoff, Electronic Eng. Times, USA
Technical Review Comm.: Dr. David Almagor, National Semiconductor, Israel; Mr. Pradeep Bardia, Sonitech International, USA; Dr. Aziz Chihoub, Siemens Corporate Res., USA; Dr. Ron Crochiere, AT&T Bell Laboratories, USA; Dr. Mohamed El-Sharkawy, Indiana U./Purdue U., USA; Dr. Joseph B. Evans, University of Kansas, USA; Dr. Hyeong-Kyo Kim, ETRI, Korea; Mr. Gerald McGuire, Analog Devices, USA; Dr. Bruce Musicus, Bolt Beranek & Newman, USA; Dr. Panos Papamichalis, Texas Instruments, USA; Mr. Robert A. Peloso, Panasonic, ATVL, USA; Dr. Matt Perry, Motorola, USA; Dr. William Ralston, The Mitre Corporation, USA; Dr. James B. Riley, MIT - Lincoln Lab., USA; Mr. Vojin Zivojnovic, RTWH, Germany
Mail, Fax or E-Mail a 400-Word Abstract by April 15, 1994 to: DSP ASSOCIATES, 18 Peregrine Rd., Newton, MA 02159 Tel: (617) 964-3817 Fax: (617) 969-6689 e_mail: DSPWorld@world.std.com ______________________________________________________________________________ Sponsored by: DSP Associates -- Electronic Engineering Times ______________________________________________________________________________ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: pchawla@vlsisun.ece.uc.edu (Praveen Chawla) Subject: Software Partitioning, Mapping and Assessment Tool ? Date: 30 Nov 93 12:31:55 Organization: Dept. of ECE, University of Cincinnati Nntp-Posting-Host: vlsisun.ece.uc.edu I am investigating the state-of-the-art in the area of partitioning, mapping and assessment of software for distributed and/or parallel computing. I will appreciate it if you could point me to relevant literature/commercial offerings in this area. If there is interest, I will summarize on the net.
Please respond by email to pchawla@mtl.com Thanks in advance Praveen Chawla Electronic Mail: pchawla@mtl.com Advanced Technology Programs Voice Mail: (513) 426-3832 ext 319 3481, Dayton-Xenia Road Phone: (513) 426-3111 Dayton, OH 45432 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ss7540@csc.albany.edu (SHUKLA SANDEEP) Subject: Re: concurrent langauges for teaching Organization: State University of New York at Albany In article <1993Nov24.133229.2463@hubcap.clemson.edu> davec@ecst.csuchico.edu (Dave Childs) writes: >I am trying to track down a good concurrent language for teaching concurrent >concepts. Something based on C, C++, or pascal would work nicely, but we >are open to other possibilities. You might like to try Distributed C, which is freely available from the Universitaet Muenchen. Another good system is PCN (it has more documentation and books), freely available from Argonne National Laboratory and Caltech. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mstalzer@pollux.usc.edu (Mark Stalzer) Subject: Re: looking for a Travelling Salesman code for CM Organization: University of Southern California, Los Angeles, CA References: <1993Nov24.133215.2390@hubcap.clemson.edu> I think you can solve this problem on a PC using Simulated Annealing as described in Numerical Recipes section 10.9. They describe a technique that can be modified to account for weather (the code given minimizes the number of crossings of the Mississippi). -- Mark ------------ Mark Stalzer, Hughes Research Labs RL65, 3011 Malibu Canyon Rd, Malibu, CA 90265 E-Mail: stalzer@macaw.hrl.hac.com Voice: 310-317-5581 FAX: 310-317-5483 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel.pvm,comp.parallel,sci.math.num-analysis From: zook@technology.com (Brian Zook) Subject: parallel ODE solvers Message-Id: <1993Nov29.163149.8337@technology.com> Summary: Looking for source code Organization: Southwest Research Institute, San Antonio, Texas Date: Mon, 29 Nov 1993 16:31:49 GMT Apparently-To: uunet!comp-parallel I am looking for source code for solving systems of ODEs on parallel computing systems. I am particularly interested in clusters running PVM, but other architectures may at least point me in the right direction. I have conducted a literature search to find appropriate papers. And I have attempted to search various archives (such as netlib) using Archie as well as cyberspace using Gopher. But I have come up empty. Perhaps these types of routines are not archived, as they are a fairly recent development. Someone out there may have some available code that they have not released. I appreciate any aid you can give me. Postings, e-mail, phone calls, etc. are all welcome. Brian Zook Southwest Research Institute San Antonio, Texas bzook@swri.edu 210-522-3630 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.transputer From: saxena@tarski.albany.edu (Tushar Saxena) Subject: Sites for Research on Multi-Transputer Systems Organization: University At Albany, New York, USA Date: Tue, 30 Nov 93 20:29:25 GMT I would like to know about the places/univs/institutions in the US where active research is going on in the area of Multi-Transputer Systems. Thanks in advance.
Tushar Saxena Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: nsrcjl@leonis.nus.sg (Jerry Lim Jen Ngee) Subject: High Performance Computing Conference '94 - Singapore Organization: National University of Singapore High Performance Computing Conference '94 - Singapore 'Challenges into the 21st century' FIRST ANNOUNCEMENT AND CALL FOR PAPERS 29 - 30 September 1994 Hyatt Regency Singapore Singapore High Performance Computing Conference '94 will bring together professionals in the field of high performance computing to present their achievements, discuss issues, and exchange experiences and ideas. It will be a showcase to gather and disseminate relevant information on high performance computing. The conference will cover various topics of high performance computing such as parallel, distributed, cluster, and heterogeneous computing. These topics include, but are not limited to, the following: * High Performance Architectures * High Performance Data Communication Networks * High Performance IO and File Systems * HPC Applications and Case Studies * Languages, Compilers, and Operating Systems * Load Balancing * Parallel, Distributed, Cluster, and Heterogeneous Algorithms * Performance Modelling and Evaluation * Reliability and Fault Tolerance * Software Tools and Programming Environments * Theory of Parallel, Distributed, and Heterogeneous Computing * Visualization for HPC * Large Application Software Development for HPC Submissions Authors are invited to submit original unpublished papers. All submitted papers will be reviewed. The papers will be considered for either conference papers or poster papers. Please include the corresponding author's email, fax and postal address with the submission. All accepted papers will be published in the conference proceedings. The deadline for submission is April 15, 1994. Four copies of the manuscript of not more than 20 double-spaced pages, including figures and text, are to be sent to : Mrs. Evelyn Lau High Performance Computing Conference'94 National Supercomputing Research Centre National University of Singapore 81 Science Park Drive #04-03 The Chadwick Singapore Science Park Singapore 0511 Tel : (65) 7709 203 Fax : (65) 7780 522 E-mail : admin@nsrc.nus.sg. Important Dates Paper submission deadline : April 15, 1994 Notification of acceptance : June 15, 1994 Camera ready papers due : July 15, 1994 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Path: hertz.njit.edu!njl5453 From: njl5453@hertz.njit.edu (Nitin Lad ee) Subject: Cray T3D system...??.. Message-ID: <1993Nov30.031453.10522@njitgw.njit.edu> Sender: news@njit.edu Nntp-Posting-Host: hertz.njit.edu Organization: New Jersey Institute of Technology, Newark, New Jersey Date: Tue, 30 Nov 1993 03:14:53 GMT Does anyone know where I can find more detailed info on the new CRAY T3D system development effort? I know that it is an Alpha-processor-based supercomputer available in various processor counts and configurations. I would very much appreciate any pointers to reference materials (magazines, articles, etc.) that talk about the hardware architecture and software support for the new system. Please reply to: njl5453@hertz.njit.edu Nitin Electrical Engineering N.J.I.T. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rick@cs.arizona.edu (Rick Schlichting) Subject: CFP: 13th Symp.
on Reliable Distributed Systems Date: 30 Nov 1993 22:29:20 -0700 Organization: University of Arizona CS Department, Tucson AZ CALL FOR PAPERS 13th Symposium on Reliable Distributed Systems Oct. 25 (Tues), 1994 - Oct. 27 (Thurs), 1994 Dana Point, California SPONSORS: IEEE Computer Society TC on Distributed Processing IEEE Computer Society TC on Fault-Tolerant Computing IFIP WG 10.4 on Dependable Computing THEME: The theme of the symposium is reliability of distributed and parallel systems, including distributed applications, distributed operating systems, and distributed databases. Papers are sought that address the reliability, availability, security, and performance aspects of distributed and parallel systems. Papers that deal with experimental results, testbeds, development, and data from operational systems are of particular interest. TOPICS OF INTEREST: The following topics, as they relate to distributed and parallel systems, are of interest to the Symposium: - System-Level and Software Fault Tolerance - Fault-Tolerance Formalism - Database Systems - Operating Systems - Security - Experimental Systems with High Reliability Mechanisms - Object-Oriented Systems - Transaction Processing Systems - Performance and Reliability Modeling - Programming Language Support for Reliable Computing - Real-Time Fault-Tolerance PAPER SUBMISSIONS: Papers must be written in English and printed using at least 11-point type and 1-1/2 line spacing. They should be no more than 20 pages in manuscript, including figures. Authors are requested to submit five copies of their manuscript by March 15, 1994 to: Prof. Richard D. Schlichting Department of Computer Science Gould-Simpson Building The University of Arizona Tucson, AZ 85721, USA +1-602-621-4324 rick@cs.arizona.edu Authors will be notified by June 1, 1994. Final camera-ready copies are due July 9, 1994. AWARDS: The Wing Toy Best Student Paper Award, carrying a monetary award, will be given to the best student paper accepted for the Symposium. A paper is eligible for the award only if (1) it will be presented at the Symposium by a student co-author, and (2) the research it presents is essentially the work of the student co-authors and the involvement of the non-student co-authors was restricted to advising the student co-authors. The detailed Award rules will be provided to the authors of the accepted papers. TUTORIALS: Persons interested in teaching a half-day or full-day tutorial on topics related to the theme of the symposium are encouraged to submit a proposal with a brief syllabus by March 15, 1994 to: Dr. Devesh Bhatt Honeywell Systems & Research Center 3660 Technology Drive MN65-2100 Minneapolis, MN 55418, USA +1-612-951-7316 bhatt@src.honeywell.com ===================================================================== SYMPOSIUM CO-CHAIRS: Kane Kim University of California, Irvine Algirdas Avizienis University of California, Los Angeles PROGRAM COMMITTEE CHAIR: Richard D. Schlichting University of Arizona PROGRAM COMMITTEE: A. Abouelnaga (TRW) F. Bastani (Univ. of Houston) B. Bhargava (Purdue Univ.) R. Bianchini (Carnegie-Mellon Univ.) F. Cristian (Univ. of California, San Diego) M. Dal Cin (Univ. of Erlangen) S. Davidson (Univ. of Pennsylvania) J. Bechta Dugan (Univ. of Virginia) K. Fuchs (Univ. of Illinois) M. Hecht (SoHaR, Inc.) F. Jahanian (Univ. of Michigan) S. Jajodia (George Mason Univ.) D. Johnson (Carnegie-Mellon Univ.) T. Kikuno (Osaka Univ.) J. Kim (Texas A&M Univ.) E. Nett (GMD) C. Pu (Oregon Graduate Institute) K. Ramamrithan (Univ. 
of Massachusetts) W. Sanders (Univ. of Arizona) L. Simoncini (Univ. of Pisa) D. Taylor (Univ. of Waterloo) P. Versissimo (INESC) TUTORIALS CO-CHAIRS: Devesh Bhatt Honeywell Systems and Research Center Gary Craig Syracuse University FINANCE CHAIR: I-Ling Yen Michigan State University LOCAL ARRANGEMENTS CO-CHAIRS: Douglas Blough University of California, Irvine Kwei-Jay Lin University of California, Irvine PUBLICITY CO-CHAIRS: Chandra Kintala AT&T Bell Labs Tom Lawrence Rome Labs Raif Yanney TRW REGISTRATION CHAIR: Luiz Bacellar University of California, Irvine AWARDS CO-CHAIRS: Leszek Lilien AT&T Bell Labs Arthur Toy NCR TC LIAISON: Bharat Bhargava Purdue University Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From: jet@nas.nasa.gov (J. Eric Townsend) Subject: mailing list info on TMC CM-5, Intel iPSC/860, Intel Paragon Sender: news@nas.nasa.gov (News Administrator) Nntp-Posting-Host: boxer.nas.nasa.gov Organization: NAS/NASA-Ames Research Center Date: Wed, 1 Dec 1993 08:00:16 GMT Apparently-To: comp-parallel@ames.arc.nasa.gov J. Eric Townsend (jet@nas.nasa.gov) last updated: 29 Nov 1993 (updated mailing addresses) This file is posted to USENET automatically on the 1st and 15th of each month. It is mailed to the respective lists to remind users how to unsubscribe and set options. INTRODUCTION ------------ Several mailing lists exist at NAS for the discussion of using and administrating Thinking Machines CM-5 and Intel iPSC/860 parallel supercomputers. These mailing lists are open to all persons interested in the systems. The lists are: LIST-NAME DESCRIPTION cm5-managers -- discussion of administrating the TMC CM-5 cm5-users -- " " using the TMC CM-5 ipsc-managers -- " " administrating the Intel iPSC/860 ipsc-users -- " " using the Intel iPSC/860 paragon-managers -- " " administrating the Intel Paragon paragon-users -- " " using the Intel Paragon The ipsc-* lists at cornell are going away, the lists here will replace them. (ISUG members will be receiving information on this in the near future.) The cm5-users list is intended to complement the lbolt list at MSC. SUBSCRIBING/UNSUBSCRIBING ------------------------- All of the above lists are run with the listserv package. In the examples below, substitute the name of the list from the above table for the text "LIST-NAME". To subscribe to any of the lists, send email to listserv@nas.nasa.gov with a *BODY* of subscribe LIST-NAME your_full_name Please note: - you are subscribed with the address that you sent the email from. You cannot subscribe an address other than your own. This is considered a security feature, but I haven't gotten around to taking it out. - your subscription will be handled by software, so any other text you send will be ignored Unsubscribing It is important to understand that you can only unsubscribe from the address you subscribed from. If that is impossible, please contact jet@nas.nasa.gov to be unsubscribed by hand. ONLY DO THIS IF FOLLOWING THE INSTRUCTIONS DOES NOT PRODUCE THE DESIRED RESULTS! I have better things to do than manually do things that can be automated. To unsubscribe from any of the mailing lists, send email to listserv@nas.nasa.gov with a body of unsubscribe LIST-NAME OPTIONS ------- If you wish to receive a list in digest form, send a message to listserv@nas.nasa.gov with a body of set LIST-NAME mail digest OBTAINING ARCHIVES ------------------ There are currently no publicly available archives. 
As time goes on, archives of the lists will be made available. Watch this space. -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kaylan%TRBOUN.BITNET@FRMOP11.CNUSC.FR Reply-To: kaylan%TRBOUN.BITNET@FRMOP11.CNUSC.FR Subject: ESS'94 - Call For Papers ESS'94 EUROPEAN SIMULATION SYMPOSIUM CALL FOR PAPERS ISTANBUL, TURKEY OCTOBER 9-12, 1994 HOSTED BY BOGAZICI UNIVERSITY Organized and sponsored by: The Society for Computer Simulation International (SCS) With cooperation of: The European Simulation Council (ESC) Ministry of Industry and Trade, Turkey Operational Research Society of Turkey (ORST) Cosponsored by: Bekoteknik Digital Hewlett Packard IBM Turk Main Topics: * Advances in Simulation Methodology and Practices * Artificial Intelligence in Simulation * Innovative Simulation Technologies * Industrial Simulation * Computer and Telecommunication Systems CONFERENCE COMMITTEE Conference Chairman: Prof. Dr. Tuncer I. Oren University of Ottawa, Computer Science Department, 150 Louis Pasteur / Pri., Ottawa, Ontario, Canada K1N 6N5 Phone: 1.613.654.5068 Fax: 1.613.564.7089 E-mail: oren@csi.uottawa.ca Program Chairman: Prof. Dr. Ali Riza Kaylan Bogazici University, Dept.of Industrial Engineering, 80815 Bebek, Istanbul, Turkey Phone: 90.212.2631540/2072 Fax: 90.212.2651800 E-Mail: Kaylan@trboun.bitnet Program Co-chairman: Prof. Dr. Axel Lehmann Universitaet der Bundeswehr, Munchen, Institut fur Technische Informatik, Werner-Heisenberg-Weg 39, D 85577 Neubiberg, Germany. Phone: 49.89.6004.2648/2654 Fax: 49.89.6004.3560 E-Mail: Lehmann@informatik.unibw-muenchen.de Finance Chairman: Rainer Rimane, University of Erlangen - Nurnberg Organization Committee: Ali Riza Kaylan, Yaman Barlas, Murat Draman, Levent Mollamustafaoglu, Tulin Yazgac International Program Committee (Preliminary): O. Balci, USA J. Banks, USA G. Bolch, Germany R. Crosbie, USA B. Delaney, USA M. S. Elzas, Netherlands H. Erkut, Turkey A. Eyler, Turkey P. Fishwick, USA E. Gelenbe, USA A. Guasch, Spain M. Hitz, Austria R. Huntsinger, USA G. Iazeolla, Italy K. Irmscher, Germany K. Juslin, Finland A. Javor, Hungary E. Kerckhoffs, Netherlands J. Kleijnen, Netherlands M. Kotva, Czech Rep. M. Koksalan, Turkey M. L. Pagdett, USA M. Pior, Germany R. Reddy, USA S. Reddy, USA B. Schmidt, Germany S. Sevinc, Australia H. Szczerbicka, Germany S. Tabaka, Japan O. Tanir, Canada G. Vansteenkiste, Belgium M. Wildberger, USA S. Xia, UK R. Zobel, UK CONFERENCE INFORMATION The ESS series (organized by SCS, the Society for Computer Simulation International) is now in its fifth year. SCS is an international non-profit organization founded in 1952. On a yearly basis SCS organizes 6 Simulation Conferences worldwide, cooperates in 2 others, and publishes the monthly magazine Simulation, a quarterly Transactions, and books. For more information, please tick the appropriate box on the reply card. During ESS'94 the following events will be presented besides the scientific program: Professional Seminars The first day of the conference is dedicated to professional seminars, which will be presented for those interested participants to expose the state-of-art overview of each of the five main themes of this conference. 
Participation fee is included in the conference registration fee. If you have suggestions for other advanced tutorial topics, please contact one of the program chairmen. Exhibits An exhibition will be held in the central hall where all participants meet for coffee and tea. There will be a special exhibition section for universities and non-profit organizations, and a special section for publishers and commercial stands. If you would like to participate in the exhibition, please contact the SCS European Office. Vendor Sessions, Demonstrations and Video Presentations For demonstrations or video sessions, please contact SCS International at the European Office. Special sessions within the scientific program will be set up for vendor presentations. Other Organized Meetings Several User Group meetings for simulation languages and tools will be organized on Monday. It is possible to have other meetings on Monday as well. If you would like to arrange a meeting, please contact the Conference Chairman. We will be happy to provide a meeting room and other necessary equipment. VENUE Istanbul, the only city in the world built on two continents, stands on the shores of the Istanbul Bogazi (Bosphorus) where the waters of the Black Sea mingle with those of the Sea of Marmara and the Golden Horn. Here on this splendid site, Istanbul guards the precious relics of three empires of which she has been the capital; a unique link between East and West, past and present. Istanbul has infinite variety: museums, ancient churches, palaces, great mosques, bazaars and the Bosphorus. However long you stay, just a few days or longer, your time will be wonderfully filled in this unforgettable city. Bogazici University, which will host ESS'94 has its origins in Robert College, first American College founded outside of the United States in 1863. It has a well deserved reputation for academic excellence and accordingly attracts students from among the best and brightest in Turkey. The University is composed of four faculties, six institutes (offering graduate programs), and two other schools. The conference location is Istanbul Dedeman, an international five star hotel, which is located in the center of the city with a spectacular view of the Bosphorus. It is in a very close district to the most of the historical places as well as to the business center. For the conference participants the single room special rate is 65 US dollars. SCIENTIFIC PROGRAM The 1994 SCS European Simulation Symposium is structured around the following five major themes. A parallel track will be devoted to each of the five topics. The conference language is English. * Advances in Simulation Methodology and Practices, e.g.: - Advanced Modelling, Experimentation, and Output Analysis and Display - Object-Oriented System Design and Simulation - Optimization of Simulation Models - Validation and Verification Techniques - Mixed Methodology Modelling - Special Simulation Tools and Environments * Artificial Intelligence in Simulation, e.g.: - Knowledge-based Simulation Environments and Knowledge Bases - Knowledge-based System Applications - Reliability Assurance through Knowledge-based Techniques - Mixed Qualitative and Quantitative Simulation - Neural Networks in Simulation * Innovative Simulation Technologies: - Virtual Reality - Multimedia Applications * Industrial Simulation, e.g. 
Simulation in: - Design and Manufacturing, CAD, CIM - Process Control - Robotics and Automation - Concurrent Engineering, Scheduling * Computer and Telecommunication Systems, e.g.: - Circuit Simulation, Fault Simulation - Computer Systems - Telecommunication Devices and Systems - Networks INVITED SPEAKERS Focusing on the main tracks of the conference, invited speakers will give special in-depth presentations in plenary sessions, which will be included in the proceedings of the conference. BEST PAPER AWARDS The 1994 European Simulation Symposium will award the best five papers, one in each of the five tracks. From these five papers, the best overall paper of the conference will be chosen. The awarded papers will be published in an International Journal, if necessary after incorporating modifications in the paper. DEADLINES AND REQUIREMENTS Extended abstracts (300 words, 2-3 pages for full and 150 words, 1 page for short papers typewritten without drawings and tables) are due to arrive in QUADRUPLICATE at the office of Ali Riza Kaylan, at the Industrial Engineering Department of Bogazici University, TURKEY before March 1, 1994. Only original papers, written in English, which have not previously been published elsewhere will be accepted. In case you want to organize a panel discussion, please contact the program chairmen. Authors are expected to register early (at a reduced fee) and to attend the conference at their own expense to present the accepted papers. If early registration and payment are not made, the paper will not be published in the conference proceedings. In the case of multi-authors, one author should be identified as the person who will act as correspondent for the paper. Abstracts will be reviewed by 3 members of the International Program Committee for full papers and one member for short papers. Notification of acceptance or rejection will be sent by April 30, 1994. An author kit with complete instruction for preparing a camera-ready copy for the proceedings will be sent to authors of accepted abstracts. The camera-ready copy of the papers must be in by July 15, 1994. Only the full papers, which are expected to be 5-6 pages long, will be published in the conference proceedings. In order to guarantee a high-quality conference, the full papers will be reviewed as well, to check whether the suggestions of the program committee have been incorporated. The nominees for the best paper awards will be selected as well. REGISTRATION FEE Author SCS members Other participants ----------------------------------------------- Registration before BF 15000 BF 15000 BF 17000 August 31, 1994 (375 ECU) (375 ECU) (425 ECU) Registration after Preregistration BF 17000 BF 20000 August 31, 1994 required (425 ECU) (500 ECU) or at the conference The registration fee includes one copy of the Conference Proceedings, attending professional seminars, coffee and tea during the breaks, all lunches, a welcome cocktail and the conference dinner. CORRESPONDENCE ADDRESS Philippe Geril The Society for Computer Simulation, European Simulation Office, University of Ghent Coupure Links 653, B-9000 Ghent, Belgium. 
Phone (Office): 32.9.233.77.90 Phone (Home): 32.59.800.804 Fax (Office): 32.9.223.49.41 E-Mail: Philippe.Geril@rug.ac.be REPLY CARD Family Name: First Name: Occupation and/or Title: Affiliation: Mailing Address: Zip: City: Country: Telephone: Fax: E-mail: Yes, I intend to attend the European Simulation Symposium ESS'94: o Proposing a paper o Proposing a panel discussion o Participating a vendor session o Contributing to the exhibition o Without presenting a paper The provisional title of my paper / poster / exhibited tool is: With the following topics: The paper belongs to the category (please tick one): o Advances in Simulation Methodology and Practices o Artificial Intelligence in Simulation o Innovative Simulation Technologies o Industrial Simulation o Computer and Telecommunication Systems The paper will be submitted as a: o Full paper o Short Paper o Poster session o Demonstration Other colleague(s) interested in the topics of the conference is/are: Name: Address: Name: Address: If you would like to receive more information about SCS and its activities, please tick the following box: o YES, I would to know more about SCS. Please mail this card immediately to: Philippe Geril, The Society for Computer Simulation, European Simulation Office University of Ghent, Coupure Links 653, B-9000 Ghent, Belgium. ============================================================================= Prof.Dr. Ali R. Kaylan Director of Computer Center Bogazici University e-mail: Kaylan@Trboun.Bitnet Dept. of Industrial Eng'g. fax-no: (90-1)265 63 57 or (90-1)265 93 62 Bebek 80815 phone: (90-1)265 93 62 Istanbul, TURKIYE phone: (90-1)263 15 40 ext. 1445,1727,1407 ============================================================================= Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.multimedia,comp.parallel From: hawaii!lc@uunet.UU.NET (Laurent Censier) Subject: **** MATISSE DBMS Presentation **** Sender: hawaii!lc@uunet.UU.NET (Laurent Censier) Organization: ADB, Inc. Date: Wed, 1 Dec 1993 17:10:29 GMT Apparently-To: uunet!comp-parallel -------------------------------------------------------------------- | MATISSE: An Industrial Strength Object Database Management System | -------------------------------------------------------------------- I) MATISSE Technology ------------------ MATISSE is a new generation database management system which combines advanced object oriented modeling capabilities with a mission critical transactional technology. It is designed to meet the needs of complex database applications which require any, or all, of the following: large volumes of complex, inter-related, multi-typed information, fault tolerance, ensured data consistency and integrity, high speed OLTP access, high through-put and optimal performance, historical object management and intelligent modeling of constraints. MATISSE complies with the traditional paradigm, where the database management system is the foundation of the information system, providing common and concurrent data storage and retrieval capabilities to multiple applications. MATISSE supports heterogeneous client-server architectures.
All its functionalities can be used regardless of the development environment used to build the applications, whether it is object oriented or not. Depending on the users needs, MATISSE can be used according to the two state-of-the-art database paradigms: the C/C++ API allows the user to access MATISSE as a highly flexible language independent object oriented database, MATISSE SQL provides a view of MATISSE as an extension of a relational model. The predecessor of MATISSE was G-BASE to which major enhancements were made. II) The Company ----------- ADB is the fully-owned US subsidiary of a French Software Publisher dedicated to develop, market, and support a new generation database management system, MATISSE. ADB also provides consulting and services assisting its customers and partners in the use of the MATISSE technology. III) MATISSE Features ---------------- Performance ----------- - Symmetric, Fine Grain, Multi-Threaded Architecture - Parallel and Asynchronous Disk I/O - Automatic Disk Optimization through Dynamic Clustering - High Speed OLTP Environment Reliability ----------- - 24 Hour - Mission Critical Operation - Media Fault Tolerant (Object Replication) - Transparent On-line Recovery - Several Months of Extensive and Demanding Tests Prior to Each Release Database Administration ----------------------- - Full On-line Administration (No Down Time) - On-line Incremental or Full Back-Up - Dynamically Increase Database Size - On-line - Full On-line Monitoring Data Management and Consistency ------------------------------- - Dynamic Schema Evolution - Consistent Database Reads without Locking - Historical Versioning, both Schema and Data Objects - Built-in Enforced Referential Integrity - Object Level Implicit or Explicit Locking Scalability ----------- - Hundreds of Concurrent On-line Users - Hundreds of Gigabytes Per Database - From Few Bytes to Four Gigabytes for Each Object - Up to Four Giga-objects Per Database Object Model ------------ - Full Object Oriented Model - User Extensible Object Meta-Schema - Support for Complex, Highly Dynamic, Variable Sized Objects - Multiple Inheritance Intelligent Objects ------------------- - Triggers at Object, Attribute, or at Relationship Level - Consistency Rules at Object, Attribute, or at Relationship Level - Customizable Intelligent Object Indexing - Automatic Inverse Relationships Open Systems ------------ - Open C, C++ API - Supports Any Commercial Development Tool and Language - No Proprietary Tool Required - Heterogeneous Cross Platform Client/Server Architecture - SQL Query Mechanism, currently in Beta IV) MATISSE Products and Services ----------------------------- Product Suite ------------- - MATISSE Client Object Oriented Services Libraries (API) - MATISSE Server Engine Libraries (API) - MATISSE Server Engine - MATISSE DB Administrator Tools - MATISSE Object Editor - MATISSE Object Browser Platforms Supported ------------------- - Sun Sparcstation - SunOS 4.1.3 - Sun Sparcstation - Solaris, Delivery Q2, 1994 - VAX - VMS, Server - HP9000 - HP-UX, Delivery in Q1, 1994 - Windows NT Client, Delivery in Q1, 1994 - Windows 3.1 Client, Delivery in Q1, 1994 - KSR1 High Performance Parallel Architecture, Delivery Q1, 1994 Services -------- - Product Training and Support - Object-Oriented Database Consulting - Object-Oriented Analysis and Design For additional information on MATISSE, contact ---------------------------------------------- In the UNITED STATES: ADB, Inc. 
238 Broadway Cambridge, MA 02139 - USA Phone: 1 (617) 354-4220 Fax: 1 (617) 547-5420 Email: info@adb.com dan@adb.com In EUROPE: ADB/Intellitic SA. 12/14, rue du Fort de Saint Cyr Montigny le Bretonneux 78182 Saint Quentin en Yvelines Cedex - FRANCE Phone: 33 (1) 30 14 54 35 Fax: 33 (1) 30 14 54 40 Email: pmo@intellitic.fr In JAPAN: ADB/Intellitic SA. c/o SGN Co., LTD Urban Toranomon Building - 1-16-4 Toranomon Minato-Ku Tokyo 105 - JAPAN Phone: 81 (3) 3593.34.31 Fax: 81 (3) 3593.34.32 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Thu, 2 Dec 1993 09:34:53 -0500 From: takis@wilcox.wellesley.edu (Takis Metaxas) Subject: Tenure track position at Wellesley College Wellesley College Computer Science Department TENURE TRACK POSITION Wellesley College is seeking applicants for a tenure-track assistant professorship in computer science. Applicants should have a Ph.D. in computer science or be close to its completion. We especially encourage candidates in computer systems and architecture, but all areas of specialty will be considered. Wellesley College is a private, liberal arts college for women that places heavy emphasis on excellence in teaching as well as research. The college offers both a major and a minor in computer science, and has a cross-registration program with M.I.T. that dramatically increases the resources and curricular options available to its faculty and students. Wellesley College has a networked campus. Academic computers on the campus-wide network include a VAX cluster, SUN, DEC and IBM RISC workstations, and a Macintosh classroom that is dedicated to computer science instruction. In addition, Apple Macintosh and IBM PCs and compatibles are distributed around campus in microcomputer labs, faculty offices and research labs. Wellesley also has a high-speed connection to the Internet. Located thirteen miles west of Boston near a high concentration of computer research institutions, Wellesley provides a unique combination of access to a major urban center and consulting opportunities while retaining a sense of the country by its 500-acre campus, lake and woodlands. Candidates interested in this position should submit a curriculum vitae and arrange for three letters of recommendation to be sent to: Ellen C. Hildreth, Chair Department of Computer Science Wellesley College Wellesley, MA 02181 ehildreth@lucy.wellesley.edu FAX: 617-283-3642 Wellesley College is an equal opportunity/affirmative action employer and welcomes applications from women and minority candidates. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ambaker@iastate.edu (Anthony M Baker) Subject: ncube MFLOP rating Keywords: fortran, ncube, mflop Sender: news@news.iastate.edu (USENET News System) Organization: Iowa State University, Ames IA Date: Wed, 1 Dec 1993 19:41:15 GMT Apparently-To: comp-parallel@beaver.cs.washington.edu I need to get an MFLOP rating for a fortran code. I already have the timing information, but I'd rather not spend a week counting up the floating point operations. Does anyone know of a utility that does this automatically? Thanks for the help! 
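The rating arithmetic itself is simple once an operation count is known: MFLOPS = (floating point operations) / (elapsed seconds x 10^6). Below is a minimal C sketch of my own with a hand-counted kernel standing in for the real Fortran code; for a large program the operation count would still have to come from a counting tool or hardware counters, which is the hard part the poster describes.
#include <stdio.h>
#include <time.h>

#define N 1000000

static double a[N], b[N], c[N];

int main(void)
{
    double flops, seconds, checksum = 0.0;
    clock_t t0, t1;
    int i;

    for (i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    t0 = clock();
    for (i = 0; i < N; i++)
        c[i] = a[i] * b[i] + a[i];    /* 2 floating point ops per iteration */
    t1 = clock();

    for (i = 0; i < N; i++)
        checksum += c[i];             /* keep the compiler from removing the loop */

    flops   = 2.0 * N;                /* hand-counted operation count */
    seconds = (double)(t1 - t0) / CLOCKS_PER_SEC;
    if (seconds > 0.0)
        printf("%.2f MFLOPS (checksum %g)\n", flops / (seconds * 1.0e6), checksum);
    return 0;
}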
Anthony Baker -- v~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~v | Anthony Baker | | Aerospace Engineering ambaker@iastate.edu | | 112 Town Engineering baker@tityus.scl.ameslab.gov | Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Ashok Gupta Date: Thu, 2 Dec 93 10:09:43 UTC Subject: Announcement - General Purpose Parallel Computing The British Computer Society Parallel Processing Specialist Group (BCS PPSG) General Purpose Parallel Computing A One Day Open Meeting with Invited and Contributed Papers 22 December 1993, University of Westminster, London, UK Invited speakers : Les Valiant, Harvard University Bill McColl, PRG, University of Oxford, UK David May, Inmos, UK A key factor for the growth of parallel computing is the availability of portable software. To be portable, software must be written to a model of machine performance with universal applicability. Software providers must be able to provide programs whose performance will scale with machine and application size according to agreed principles. This environment presupposes a model of parallel performance, and one which will perform well for irregular as well as regular patterns of interaction. Adoption of a common model by machine architects, algorithm & language designers and programmers is a precondition for general purpose parallel computing. Valiant's Bulk Synchronous Parallel (BSP) model provides a bridge between application, language design and architecture for parallel computers. BSP is of the same nature for parallel computing as the Von Neumann model is for sequential computing. It forms the focus of a project for scalable performance parallel architectures supporting architecture independent software. The model and its implications for hardware and software design will be described in invited and contributed talks. The PPSG, founded in 1986, exists to foster development of parallel architectures, languages and applications & to disseminate information on parallel processing. Membership is completely open; you do not have to be a member of the British Computer Society. For further information about the group contact either of the following : Chair : Mr. A. Gupta Membership Secretary: Dr. N. Tucker Philips Research Labs, Crossoak Lane, Paradis Consultants, East Berriow, Redhill, Surrey, RH1 5HA, UK Berriow Bridge, North Hill, Nr. Launceston, gupta@prl.philips.co.uk Cornwall, PL15 7NL, UK Please share this information and display this announcement The British Computer Society Parallel Processing Specialist Group (BCS PPSG) General Purpose Parallel Computing 22 December 1993, Fyvie Hall, 309 Regent Street, University of Westminster, London, UK Provisional Programme 9 am-10 am Registration & Coffee L. Valiant, Harvard University, "Title to be announced" W. McColl, Oxford University, Programming models for General Purpose Parallel Computing A. Chin, King's College, London University, Locality of Reference in Bulk-Synchronous Parallel Computation P. Thannisch et al, Edinburgh University, Exponential Processor Requirements for Optimal Schedules in Architecture with Locality Lunch D. May, Inmos "Title to be announced" R. Miller, Oxford University, A Library for Bulk Synchronous Parallel Programming C. Jesshope et al, Surrey University, BSPC and the N-Computer Tea/Coffee P. Dew et al, Leeds University, Scalable Parallel Computing using the XPRAM model S.
Turner et al, Exeter University, Portability and Parallelism with `Lightweight P4' N. Kalentery et al, University of Westminster, From BSP to a Virtual Von Neumann Machine R. Bisseling, Utrecht University, Scientific Computing on Bulk Synchronous Parallel Architectures B. Thompson et al, University College of Swansea, Equational Specification of Synchronous Concurrent Algorithms and Architectures 5.30 pm Close Please share this information and display this announcement The British Computer Society Parallel Processing Specialist Group Booking Form/Invoice BCS VAT No. : 440-3490-76 Please reserve a place at the Conference on General Purpose Parallel Computing, London, December 22 1993, for the individual(s) named below. Name of delegate BCS membership no. Fee VAT Total (if applicable) ___________________________________________________________________________ ___________________________________________________________________________ ___________________________________________________________________________ Cheques, in pounds sterling, should be made payable to "BCS Parallel Processing Specialist Group". Unfortunately credit card bookings cannot be accepted. The delegate fees (including lunch, refreshments and proceedings) are (in pounds sterling) : Members of both PPSG & BCS: 55 + 9.62 VAT = 64.62 PPSG or BCS members: 70 + 12.25 VAT = 82.25 Non members: 90 + 15.75 VAT = 105.75 Full-time students: 25 + 4.37 VAT = 29.37 (Students should provide a letter of endorsement from their supervisor that also clearly details their institution) Contact Address: ___________________________________________ ___________________________________________ ___________________________________________ Email address: _________________ Date: _________________ Day time telephone: ________________ Places are limited so please return this form as soon as possible to : Mrs C. Cunningham BCS PPSG 2 Mildenhall Close, Lower Earley, Reading, RG6 3AT, UK (Phone 0734 665570) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jonathan@CS.HUJI.AC.IL (Jonathan Maresky) Subject: Financial applications on Supercomputers Date: 2 Dec 1993 14:00:15 GMT Organization: Hebrew University of Jerusalem Nntp-Posting-Host: mangal.cs.huji.ac.il Hi all, I know that this is the wrong place: either reply or send this somewhere else... I'm looking for financial simulations on supercomputers, more specifically market predictions (stock, currency, futures, I'm not fussy). It's for an impending seminar and so far I've mostly drawn blanks. In fact it could be any computationally intensive financial simulation, not necessarily on a supercomputer. Please mail with ideas, references, etc. Thanks in advance, Jonathan Maresky Institute of Computer Science Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.super,news.announce.conferences,ch.general,comp.parallel From: rehmann@cscs.ch (Rene M. 
Rehmann) Subject: EXTENDED DEADLINE: IFIP WG10.3 conference Message-ID: <1993Dec2.163300.17315@cscs.ch> Keywords: massive parallelism, programming, tools, working conference, CFP Sender: usenet@cscs.ch (NEWS Manager) Nntp-Posting-Host: vevey.cscs.ch Reply-To: rehmann@cscs.ch Organization: Centro Svizzero di Calcolo Scientifico (CSCS), Manno, Switzerland Date: Thu, 2 Dec 1993 16:33:00 GMT CALL FOR PAPERS EXTENDED DEADLINE IFIP WG10.3 WORKING CONFERENCE ON PROGRAMMING ENVIRONMENTS FOR MASSIVELY PARALLEL DISTRIBUTED SYSTEMS April 25 - 30, 1994 Monte Verita, Ascona, Switzerland Massively parallel systems with distributed resources will play a very important role for the future of high performance computing. One of the current obstacles of these systems is their difficult programming. The proposed conference will bring together active researchers who are working on ways how to help programmers to exploit the performance potential of massively parallel systems. The working conference will consist of sessions for full and short papers, interleaved with poster and demonstration sessions. The Conference will be held April 25 - 30, 1994 at the Centro Stefano Franscini, located in the hills above Ascona at Lago Maggiore, in the southern part of Switzerland. It is organized by the Swiss Scientific Computing Center CSCS ETH Zurich. The conference is the forthcoming event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) on Programming Environments for Parallel Computing. The conference succeeds the 1992 Edinburgh conference on Programming Environments for Parallel Computing. SUBMISSION OF PAPERS Submission of papers is invited in the following areas: -- Programming models for parallel distributed computing -- Computational models for parallel distributed computing -- Program transformation tools -- Concepts and tools for the design of parallel distributed algorithms -- Reusability in parallel distributed programming -- Concepts and tools for debugging massively parallel systems (100+ processing nodes) -- Concepts and tools for performance monitoring of massively parallel systems (100+ processing nodes) -- Tools for application development on massively parallel systems -- Support for computational scientists: what do they really need ? -- Application libraries (e.g., BLAS, etc.) for parallel distributed systems: what do they really offer ? -- Problem solving environments for parallel distributed programming Authors are invited to submit complete, original, papers reflecting their current research results. All submitted papers will be refereed for quality and originality. The program committee reserves the right to accept a submission as a long, short, or poster presentation paper. Manuscripts should be double spaced, should include an abstract, and should be limited to 5000 words (20 double spaced pages); The contact authors are requested to list e-mail addresses if available. Fax or electronic submissions will not be considered. Please submit 5 copies of the complete paper to the following address: PD Dr. Karsten M. Decker IFIP 94 CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland IMPORTANT DATES Deadline for submission: December 17, 1993 Notification of acceptance: February 15, 1994 Final versions: March 15, 1994 CONFERENCE CHAIR Karsten M. Decker CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland phone: +41 (91) 50 8233 fax: +41 (91) 50 6711 e-mail: decker@serd.cscs.ch ORGANIZATION COMMITTEE CHAIR Rene M. 
Rehmann CSCS-ETH Zurich Via Cantonale CH-6928 Manno Switzerland phone: +41 (91) 50 8234 fax: +41 (91) 50 6711 e-mail: rehmann@serd.cscs.ch PROGRAM COMMITTEE Francoise Andre, IRISA, France Thomas Bemmerl, Intel Corporation, Germany Arndt Bode, Technical University Muenchen, Germany Helmar Burkhart, University Basel, Switzerland Lyndon J. Clarke, University of Edinburgh, UK Michel Cosnard, Ecole Normale Superieure de Lyon, France Karsten M. Decker, CSCS-ETH Zurich, Switzerland Thomas Fahringer, University of Vienna, Austria Claude Girault, University P.et M. Curie Paris, France Anthony J. G. Hey, University of Southhampton, UK Roland N. Ibbett, University of Edinburgh, UK Nobuhiko Koike, NEC Corporation, Japan Peter B. Ladkin, University of Stirling, UK Juerg Nievergelt, ETH Zurich, Switzerland Edwin Paalvast, TNO-TPD, The Netherlands Gerard Reijns, Delft University of Technology, The Netherlands Eugen Schenfeld, NEC Research Institute, USA Clemens-August Thole, GMD, Germany Owen Thomas, Meiko, UK Marco Vanneschi, University of Pisa, Italy Francis Wray, Cambridge, UK MONTE VERITA, ASCONA, SWITZERLAND Centro Stefano Franscini, Monte Verita, located in the scenic hills above Ascona, with a beautiful view on Lago Maggiore, has excellent conference and housing facilities for about sixty participants. Monte Verita enjoys a sub-alpine/mediterranean climate with mean temperatures between 15 and 18 C in April. The closest airport to Centro Stefano Franscini is Lugano-Agno which is connected to Zurich, Geneva and Basle and many other cities in Europe by air. Centro Stefano Franscini can also be reached conveniently by train from any of the three major airports in Switzerland to Locarno by a few hours scenic trans-alpine train ride. It can also be reached from Milano in less than three hours. For more information, send email to ifip94@cscs.ch Karsten M. Decker and Rene M. Rehmann --- Rene M. Rehmann phone: +41 (91) 50 8234 Section of Research and Development (SeRD) fax : +41 (91) 50 6711 Swiss Scientific Computing Center CSCS email: rehmann@cscs.ch Via Cantonale, CH-6928 Manno, Switzerland Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: yates@rafael.llnl.gov (Kim Yates) Subject: I/O on parallel machines Date: 2 Dec 1993 18:01:56 GMT Organization: Lawrence Livermore National Laboratory Nntp-Posting-Host: rafael.llnl.gov Are you satisfied with the performance and usability of I/O on parallel computers? I'm a researcher at Livermore rather loosely affiliated with the National Storage Laboratory here, and I'm trying to evaluate the current state of the art. I'm interested in all aspects of parallel I/O: hardware, software, performance, programmability, portability, you name it! If you have an I/O intensive application running on any sort of parallel computer, I'd like to hear about your experiences, good or bad. Even short, one-or-two sentence responses would be welcome. Of course, more information is better, including the relevant characteristics of your system and application, and a description of any problems encountered. Thanks in advance. 
Robert Kim Yates rkyates@llnl.gov Computing Research Group Lawrence Livermore National Laboratory Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tim@osiris.usi.utah.edu (Timothy Burns) Subject: FAQ on message passing packages Organization: Utah Supercomputing Institute, University of Utah Unfortunately, comp.parallel does not have an FAQ on rtfm.mit.edu. I am therefore cluttering the net with my question: Does anyone know where I can find the tcgmsg package? I am also looking for a whole list of packages. Thanks, -- Tim Burns email: tim@osiris.usi.utah.edu USI, 85 SSB, Univ. of Utah, UT 84112 phone: (801)581-5172 +--------------------------------------------------------------------------+ | Even the most brilliant scientific discoveries will in time change and | | perhaps grow obsolete, as new scientific manifestations emerge. But Art | | is eternal; for it reveals the inner landscape which is the soul of man. | +---------------------------------- --Martha Graham ---------+ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: gbyrd@mcnc.org (Gregory T. Byrd) Subject: Re: __shared on the KSR1 ? Reply-To: gbyrd@mcnc.org (Gregory T. Byrd) Organization: North Carolina Supercomputing Center References: <1993Dec1.163034.16101@hubcap.clemson.edu> First, I don't understand why you would expect to see different behavior from the three cases. One thread is writing to all of the arrays and then lots of threads are reading them. It's correct behavior for the printing threads to read the same values. What were you expecting? Second, I believe you are confusing the notion of the private variable c with the pointer that's stored there. You do not use the variable c in any of the created threads -- instead you pass them the contents of c, which is a pointer. All *data* on the KSR1 is shared (within a process), even though not all *variables* are shared -- a private variable refers to a section of the shared global memory that is set aside for a particular thread. Other threads using that same variable name will refer to other memory locations. If I want to pass the address of my private variable to other threads, that's perfectly fine. So the initial thread declares a private variable c and then passes the contents of that variable, which is a global pointer (there is no other kind), to the other threads. The other threads access the data pointed to by the initial thread's "private" variable c. If you malloc'ed and initialized the arrays with different values in different threads, and then printed them, then you should see some interesting differences. ...Greg Byrd MCNC / Information Technologies Division gbyrd@mcnc.org 3021 Cornwallis Road / P.O. Box 12889 (919)248-1439 Research Triangle Park, NC 27709-2889
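A minimal sketch of the same idea in portable C with POSIX threads (this is not the KSR-specific threads interface or the __shared qualifier discussed above, and the names below are invented for illustration): the variable c is local to main(), but the pointer value stored in it refers to heap data that every created thread can reach, so passing that value along lets all of the workers read the same array.

   #include <pthread.h>
   #include <stdio.h>
   #include <stdlib.h>

   #define NTHREADS 4
   #define N 8

   /* Each worker receives the *contents* of main()'s variable c,
      i.e. a pointer into heap storage that all threads share. */
   static void *worker(void *arg)
   {
       double *data = (double *)arg;      /* same array in every thread */
       printf("worker sees data[0] = %g\n", data[0]);
       return NULL;
   }

   int main(void)
   {
       double *c;                         /* the variable is private to main ...     */
       pthread_t tid[NTHREADS];
       int i;

       c = malloc(N * sizeof *c);         /* ... but the data it points to is shared */
       for (i = 0; i < N; i++)
           c[i] = 42.0;

       for (i = 0; i < NTHREADS; i++)
           pthread_create(&tid[i], NULL, worker, c);   /* pass the pointer value, not the variable */
       for (i = 0; i < NTHREADS; i++)
           pthread_join(tid[i], NULL);

       free(c);
       return 0;
   }

Every worker prints 42, mirroring the behaviour described above; only if each thread allocated and initialized its own array would the printed values differ.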
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Precedence: first-class From: w.purvis@daresbury.ac.uk (Bill Purvis, ext 3357) Subject: Re: What is a good multicomputer disk block Organization: Daresbury Laboratory, UK References: <1993Dec1.163159.16579@hubcap.clemson.edu> Reply-To: w.purvis@daresbury.ac.uk Nntp-Posting-Host: dlse.dl.ac.uk Dick Wilmot wrote: >I heard that a study at a supercomputer site found that 30% of the files >were write-only (never actually read -- about as cold as you can get). I can believe this, and this is a highly plausible explanation - most supercomputers have a reputation (possibly over-rated) of being unreliable. People who invest lots of time/money in large computations learn that keeping checkpoint and log files is a good way of insuring against system crashes. If the program runs to completion you never bother looking at these files - simply delete them. If the system should crash, you would read the log, and you may be able to save a lot of machine time by re-starting the program from a checkpoint. Since most major computations result in a small amount of data, while checkpoints and logs can be enormous, it seems quite reasonable to me. Bill Purvis. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: mmisra@slate.mines.colorado.edu (Manavendra Misra) Subject: Dept Head Position: Colorado School of Mines Message-ID: <1993Dec2.235434.61730@slate.mines.colorado.edu> Sender: mmisra@slate.mines.colorado.edu (Manavendra Misra) Date: Thu, 2 Dec 1993 23:54:34 GMT Organization: Colorado School of Mines FROM: Steve Pruess (spruess@slate.mines.colorado.edu) DATE: November 12, 1993 SUBJECT: Announcement of Department Head Opening -------------------------------------------------------------------------- Colorado School of Mines The Colorado School of Mines is seeking candidates for the position of Head of the Department of Mathematical and Computer Sciences. This department offers BS, MS, and PhD degrees under the department title. With a faculty of 18 tenured and tenure track members, the department annually receives roughly a million dollars in grants; 116 undergraduate students and 70 graduate students are currently enrolled in our degree programs. The position requires a PhD in a mathematical or computer science discipline. The applicant should have a sufficiently outstanding record of scholarly achievement and teaching experience to justify a tenured appointment at the Full Professor level. In addition, the successful applicant must have held an academic position for at least five years, and show evidence of demonstrable administrative ability, including visionary leadership, communication skills, and effective interaction and evaluation of personnel. The Head is expected to manage and direct the department's efforts in instruction and in scholarship, to continue and enhance its excellence in teaching and in research, to plan and oversee the development of its research activities and academic programs, and to represent the department on campus and externally. The Colorado School of Mines is a state university, internationally renowned in the energy, materials, and resource fields, attracting outstanding students in a broad range of science and engineering disciplines. The School of Mines is strongly committed to quality teaching and research. CSM provides an attractive campus environment, a collegial atmosphere, relatively small size (3000 students, about 30% in graduate programs), and an ideal location in the foothills of the Rocky Mountains 13 miles from downtown Denver. Applications will be considered beginning February 15, 1994 and thereafter until the position is filled.
The applicant should provide a statement of administrative, pedagogical, and scholarly philosophy, which should include a discussion of advantages and disadvantages of programs combining both Computer Science and Mathematics, and how to reconcile research vs. teaching conflicts. This letter and a vita should be sent by postal mail to Colorado School of Mines Department Head Search #94-01-31 1500 Illinois Street Golden, CO 80401 The applicant must also arrange for five letters of reference to be mailed to the above address or sent by email to spruess@slate.mines.colorado.edu CSM is an Affirmative Action/Equal Opportunity Employer. Women and minorities are encouraged to apply. ============================================================================ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: arvindk@dogmatix.CS.Berkeley.EDU (Arvind Krishnamurthy) Newsgroups: comp.parallel Subject: Split-C language for the CM5 Organization: University of California, Berkeley This is to announce a Split-C release for the CM5. Here is a short blurb on the language: ------------- Split-C is a parallel extension of the C programming language primarily intended for distributed memory multiprocessors. It is designed around two objectives. The first is to capture certain useful elements of shared memory, message passing, and data parallel programming in a familiar context, while eliminating the primary deficiencies of each paradigm. The second is to provide efficient access to the underlying machine, with no surprises. (This is similar to the original motivation for C---to provide a direct and obvious mapping from high-level programming constructs to low-level machine instructions.) Split-C does not try to obscure the inherent performance characteristics of the machine through sophisticated transformations. This combination of generality and transparency of the language gives the algorithm or library designer a concrete optimization target. ------------- The newest version of Split-C (for the CM5) can always be found in ftp.cs.berkeley.edu:ucb/CASTLE/Split-C. The current version is stable (has been in use for months) and easy to install. And if you are just interested in the language, there is also a tutorial and a paper that describes the language in the same directory. Please send bugs/questions/suggestions/feedback to split-c@boing.cs.berkeley.edu. - The Split-C team
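As a rough illustration of the flavor of the language (the syntax here is recalled from Split-C documentation rather than taken from this announcement, so the exact spellings of the entry point, the global-pointer qualifier, the split-phase operators, and the toglobal() helper are assumptions to be checked against the tutorial in the FTP directory above): a global pointer names memory on another processor, and a split-phase read lets the communication overlap with local work until an explicit sync.

   /* Hypothetical Split-C-style fragment -- not from the release; verify against the tutorial. */
   #include <stdio.h>

   void splitc_main(int argc, char **argv)
   {
       double mine = MYPROC;         /* ordinary per-processor C data                   */
       double neighbor_val;
       double *global gp;            /* global pointer: (processor, local address) pair */

       barrier();                    /* make sure every processor has initialized mine  */

       /* toglobal() is assumed here as the way to build a global pointer from a
          processor number and a local address.                                          */
       gp = toglobal((MYPROC + 1) % PROCS, &mine);

       neighbor_val := *gp;          /* split-phase read: start the get ...              */
       /* ... unrelated local work could overlap with the communication here ...         */
       sync();                       /* ... then wait for the value to arrive            */

       printf("processor %d read %g from its neighbor\n", MYPROC, neighbor_val);
   }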
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: huesler@ife.ee.ethz.ch (Rene Huesler) Subject: ALLCACHE (KSR1) Hi, I'd like to get some inside information about the ALLCACHE from KSR1. All I have found so far is the technical summary from KSR, but the information I got from there is not sufficient for me. I'd like to get information about latency and how the ARD works. Thanks for any information Rene | Rene Huesler | << NEW PHONE NUMBER >> | | Swiss Federal Institute of Technology | Phone: (-41) 1 632 51 42 | | Electronics Laboratory ETZ H63 | or 632 66 53 | | Gloriastrasse 35 | FAX: (-41) 1 262 16 55 | Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: antonio@cs.tu-berlin.de (Antonio Maria Ferreira Pinto Leite) Subject: American Express CM-5's Date: 3 Dec 1993 08:26:50 GMT Organization: Technical University of Berlin, Germany Nntp-Posting-Host: marple.cs.tu-berlin.de Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit In the latest issue of the CACM, W.D. Hillis and L.W. Tucker mention in their article about the CM-5 that American Express is using 2 CM-5 machines to process its customer database. I wonder if someone could send me (or post here, naturally) some information on this subject, for instance - configuration - cost - description of the applications Thank you in advance Antonio ------------------------------------------------------------------------------ E-Mail: antonio@cs.tu-berlin.de Snail-Mail: Antonio Pinto Leite Technische Universitaet Berlin Institut fuer Angewandte Informatik Franklinstr. 28/29 Sekr. 5-13 10587 Berlin -- reply to: antonio@cs.tu-berlin.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: udo@uranus.informatik.rwth-aachen.de (Udo Brocker) Subject: Programmiererstelle (Programmer position) Date: 3 Dec 1993 10:32:09 GMT Organization: Rechnerbetrieb Informatik - RWTH Aachen Nntp-Posting-Host: uranus.lfbs.rwth-aachen.de Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Programmer (m/f) wanted Chair for Operating Systems (Lehrstuhl fuer Betriebssysteme), RWTH Aachen Group for operating system development and systems-level programming ------------------------------------------------------------------------------- The Chair for Operating Systems works on the development of operating software for parallel computers. In addition, applications are to be ported to these machines and developed further. Three groups are being set up for this purpose. For the group working on operating system development and systems-level software we are looking for a programmer. The requirements for the position are either a Fachhochschule degree in computer science or an equivalent qualification; in Aachen this would be, for example, the training as a mathematical-technical assistant. Very good programming skills in the C programming language are expected. Extensive experience with the UNIX operating system is likewise a prerequisite. Experience in operating system programming (e.g., device drivers) would be desirable. Remuneration is according to BAT IVb; the statutory and social provisions are laid down in the BAT for the public service. Interested applicants should send a short curriculum vitae to: Udo Brocker e-mail: udo@lfbs.rwth-aachen.de Tel.: +49-241-80-7635 ---------- | _ Udo Brocker, Lehrstuhl fuer Betriebssysteme, RWTH Aachen |_|_`__ Kopernikusstr.
16, D-52056 Aachen, | |__) Tel.: +49-241-807635; Fax: +49-241-806346 |__) email: udo@lfbs.rwth-aachen.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: bast@ph-cip.uni-koeln.de (Barthel Steckemetz) Subject: Re: looking for a Travelling Salesman code for CM Organization: CipLab - Institutes of Physics, University of Cologne, Germany I am working on the travelling salesman problem on a Parsytec 1024-transputer parallel computer. I don't know whether this helps you. Please mail me if you want. Barthel Steckemetz bs@parix2.mi.uni-koeln.de
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: vishkin@umiacs.umd.edu (Uzi Vishkin) Newsgroups: comp.theory,comp.parallel Subject: Post doc position Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742 A POST-DOC Position Applications are invited for an anticipated post-doctoral position in the area of discrete algorithms at the University of Maryland Institute for Advanced Computer Studies (UMIACS). A Ph.D in Computer Science or a related area is required. Please send a letter of application, and a resume to Prof. Larry S. Davis Director, UMIACS, A.V. Williams Bldg., College Park, MD 20742-3251. Attach to your letter a short description (2-3 pages) of your research program; please try to respond to the following questions in the program: (1) What kind of research problems (or themes) are you planning to work on? (2) Why are these problems interesting? (3) Will your research plans be enhanced by experimental work? (A well-articulated positive or negative answer will be appropriate here.) Also, have 3-5 letters of recommendation forwarded directly to Prof. Davis. The University of Maryland Institute for Advanced Computer Studies (UMIACS) was established in 1985 on the College Park Campus as an independent state-funded interdisciplinary computer research unit.
Research is being conducted in the areas of Artificial Intelligence, Computation Theory, Computer Systems, Design and Analysis of Algorithms, Database Systems, Fault Tolerance, Numerical Analysis, Parallel Processing, Performance Evaluation and Software Engineering. EOAA employer. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hra@pollux.cs.uga.edu (Hamid R. Arabnia) Subject: CFP - Transputer Research And Applications (NATUG7) Date: 3 Dec 1993 15:58:07 GMT Organization: University of Georgia, Athens Nntp-Posting-Host: pollux.cs.uga.edu ************************************************************************ CALL FOR PAPERS 1994 Transputer Research And Applications Conference Sponsored by North American Transputer Users Group October 23-25, 1994 The 1994 Transputer Research And Applications Conference (7th in the series) will be held in Athens, Georgia (USA), on October 23 through 25. The first day is reserved for tutorial sessions. Contributions are being solicited in the areas of hardware, software, and applications of processors that may be used as basic building blocks for multicomputer systems such as SGS-INMOS T9000, T8xx, T4xx, the Texas Instruments C40, the DEC Alpha AXP, and the Intel i860 and iWarp. Topics of interest include all aspects of the following: parallelism as a concept (models), parallel algorithms, parallel machine architectures, interconnection networks, software tools, operating systems for multiprocessors, mapping algorithms into multiprocessors, scalable systems (algorithms and hardware), scalability of crossbar switches, and applications (such as: numerical, imaging, vision, GIS, ...). Presentations will be allotted 20 minutes each, with ample additional time for questions. Additional papers may be presented at poster sessions if there is sufficient interest. All accepted papers will appear in a published proceedings. Abstracts of 1000 - 1500 words must be received by May 16, 1994. Notices of acceptance will be sent out by June 15, 1994 and final drafts of accepted papers will be due by July 15, 1994. Authors should indicate on the abstract whether they would be willing to present the paper in a poster session. Please include mail address, electronic mail address, telephone and FAX numbers of the presenter. Abstracts should be submitted to: Prof. Hamid R. Arabnia University of Georgia Department of Computer Science 415 Graduate Studies Research Center Athens, Georgia 30602, USA Tel: (706) 542-3480 Fax: (706) 542-2966 email: hra@cs.uga.edu Email submissions of abstracts in ascii form are preferred. ************************************************************************ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: anna@deis35.cineca.it (Anna Ciampolini) Subject: ECOOP94 Call for Demo Organization: DEIS ... le news Date: Fri, 3 Dec 93 15:36:07 GMT Apparently-To: comp-parallel@uunet.uu.net Call for Demonstration Proposals - ECOOP '94 The Eighth European Conference on Object-Oriented Programming Bologna, Italy July 6-8, 1994 The 1994 Eighth European Conference on Object-Oriented Programming will be held on July 6-8, 1994, in Bologna, Italy. The Conference aims to bring together researchers and practitioners from academia and industry to discuss and exchange new developments in object-oriented languages, systems and methods. A Demonstration session is planned in parallel with the Conference sessions. 
Demonstrations of object-oriented software are invited to illustrate innovative ideas. Candidate demos should: * illustrate innovative object-oriented concepts; * use advanced technologies; * present non-commercial products (an exhibition session for commercial object-oriented software is also planned). Proposals for demonstrations should be approximately three pages in length, and should contain: * a description of the demo, identifying the specific technical issues that will be addressed; * a discussion of the relevance of the demo for the object-oriented programming community; * the hardware/software requirements for the demonstration. Acceptance of demos will be decided on the basis of their technical merit, scientific relevance and novelty; it will also be constrained by the organizers' capability to furnish the required hardware/software. Proposals must be submitted no later than April 1, 1994 to: Anna Ciampolini ECOOP'94 Demonstration Chair DEIS - Universita' di Bologna Viale Risorgimento 2 I-40136 Bologna, Italy Tel.: +39 51 6443033 Fax : +39 51 6443073 E-mail: anna@deis33.cineca.it Acceptance will be notified no later than May 16, 1994. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sscott@mcs.kent.edu (Stephen Scott) Subject: Wanted: SC`93 paper & abstract list Date: 3 Dec 1993 16:23:00 GMT Organization: Kent State University Nntp-Posting-Host: mcs.kent.edu Does anyone have a list of papers & abstracts from Supercomputing '93 that they could forward? I would also like the same information from the "Parallel Computation on a Network of Computers -- A Minisymposium during Supercomputing'93." thanks, stephen sscott@mcs.kent.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: convex.list.info-clusters,comp.parallel From: suplick@convex.com (Jean Suplick) Subject: Load Balancer: what version? Sender: usenet@convex.com (news access account) Nntp-Posting-Host: mikey.convex.com Organization: CONVEX Computer Corporation, Richardson, Tx., USA Date: Fri, 3 Dec 1993 16:57:13 GMT X-Disclaimer: This message was written by a user at CONVEX Computer Corp. The opinions expressed are those of the user and not necessarily those of CONVEX. Apparently-To: hypercube@hubcap.clemson.edu Can someone tell me what the latest version of Load Balancer is? My literature is from 1992 and I fear it is out of date. I'm most interested in understanding what new features might have been added in a later release. Thanks, Jean suplick@convex.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Fri, 3 Dec 93 13:39:59 -0700 From: mdb528@michigan.et.byu.edu (Matthew D. Bennett) Organization: Brigham Young University, Provo UT USA Subject: Distributed Computing Environments Keywords: distributed computing A fellow student and I are about to embark on a development effort to build a preprocessor to help develop distributed programs. If there is something already out there, I would be very appreciative of some information. If there is nothing out there, but you are interested in our project, please send me some mail and I will forward information on the preprocessor and how to get a copy of it. Matthew D. Bennett -- ----------------------------------------------------------------------------- If you have to stop to catch your breath, you're not skiing hard enough!
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stew@sep.Stanford.EDU (Stewart Levin) Newsgroups: comp.sys.super,comp.parallel,su.computers Subject: Stanford Winter Quarter course announcement Date: 3 Dec 1993 20:48:44 GMT Organization: Stanford Exploration Project Nntp-Posting-Host: oas.stanford.edu Keywords: high performance computing, supercomputing (As this course will be broadcast via the Stanford Instructional Television Network, I am cross-posting outside of Stanford, too.) Title: High Performance Computing Units: 2 (1 without lab) Qtr: Winter 1993-94 Text: High Performance Computing, Kevin Dowd, O'Reilly & Associates Course No: SCCM 220-240-0-01 Abstract: This course covers effective techniques for mapping numerically intensive algorithms to modern high performance computer platforms such as the Cray C-90 or the IBM RS6000. This includes understanding hardware parallelism and software pipelining, compiler directives and limitations, source code transformations to enhance register and memory usage, and when and how to delve into assembly language for maximum performance. Hands-on exercises will develop skills in applying general principles to a variety of architectures, including RISC, vector, and VLIW machines. Prerequisites: Familiarity with Fortran and C are highly recommended. Prior assembly language experience helpful, but not required. Maximum enrollment: 24 for labs (2 units), lectures only beyond that Labs: Much of the course work will consist of computer labs requiring implementation of carefully selected algorithms on various architectures. Topics by week: Week 1: Overview of course goals and timetable Architectural components Functional units Iterations and looping Software pipelining Lab 1: Complex vector multiplication benchmarking (introduction to computers used in the course.) Week 2: Operation counting and performance prediction VLIW programming paradigm - horizontal microcode Lab 2: Complex vector multiplication in horizontal microcode (i860 programming) Week 3: Memory hierarchies % Registers - scalar and vector % Memory banks % Cache % Paging Algorithm and data layout considerations Lab 3: Matrix-vector multiplication - blas primitives & block methods Week 4: Optimizing compilers % Dependency analysis % Instruction reordering % Loop unrolling % Loop interchange % Strength reduction % Hoisting and sinking Compiler directives Lab 4: Tridiagonal equations: overcoming recursion dependencies Week 5: Vectorization SIMD parallelism Strip mining Vectorizing conditional calculations Indirect addressing Lab 5: Vectorizing Quicksort Week 6: RISC architectures Hardware optimization support RISC compilers Superscalar designs Lab 6: Matrix-matrix multiplication Week 7: Massively parallel computing Data parallel model Alphabet soups Performance analysis Load balancing Lab 7: On-processor optimization: Complex multiplication (CM-5) Week 8: Performance versus Portability When is assembly programming indicated? Variant or alternative algorithms. 
Lab 8: FFT's Week 9: I/O performance Blocking Sequential and random access RAID systems Out-of-core calculations Lab 9: 3D finite-differences Week 10: Future trends in High Performance Computing Architectures Languages and compilers Network computing Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: leech@cs.unc.edu (Jon Leech) Subject: Re: SIMD/MSIMD for 3D graphics Date: 3 Dec 1993 22:25:06 -0500 Organization: The University of North Carolina at Chapel Hill References: In article Zahid.Hussain@brunel.ac.uk (Zahid Hussain) writes: >Does anyone have any reference as to how SIMD/MSIMD parallel processors >are being used for 3D graphics? In which directions are the research >being directed? Rendering is the direction with the most experience and examples, probably because rasterization is inherently highly parallel. Our group has been building massively parallel (tens to hundreds of thousands of PEs) SIMD rasterization processors since the mid-80s. See the paper by Molnar et al. in the SIGGRAPH '92 proceedings on PixelFlow for the latest machine; earlier papers from the group focus more on the SIMD aspect. SGI's RealityEngine architecture uses more moderate levels of parallelism (320 PEs) in their rasterizers (see *their* SIGGRAPH '92 paper), and other commercial designs are heading this way. As far as other types of rendering, people have done ray-tracers on Connection Machines and the like, though I don't have references off the top of my head for those. The geometric transformation stage of the graphics pipeline can benefit from moderate levels of parallelism, and some graphics systems use a SIMD architecture for this (synchronized MIMD PEs using existing commercial CPUs or ASICs are more common). Usually this is just 4-8 PEs, with high FP performance. Modelling can sometimes make good use of SIMD architectures, as in Karl Sims' SIGGRAPH '89 (or maybe '90) paper on particle systems (in some sense, any dynamics model could be used for 3D graphics, but this is probably not what you're after). Note that much of the work I mention is done with special-purpose SIMD systems, rather than a CM-2 or MasPar. These specialized systems usually have little or no communications among the PEs, and are tightly coupled to other components of a graphics system, such as the frame buffer. Of course you can implement the same algorithms on more general architectures, with considerably less performance for the same amount of silicon. I'd be more specific with citations, but I'm at home and my conference proceedings aren't. Jon Leech UNC Pixel-Planes Project __@/ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: parallel@netcom.com (B. Mitchell Loebel) Subject: The PARALLEL Processing Connection - What Is It? Organization: NETCOM On-line Communication Services (408 241-9760 guest) The PARALLEL Processing Connection is an entrepreneurial association; we mean to assist our members in spawning very successful new businesses involving parallel processing. Our meetings take place on the second Monday of each month at 7:15 PM at Sun Microsystems at 901 South San Antonio Road in Palo Alto, California. Southbound travelers exit 101 at San Antonio ; northbound attendees also exit at San Antonio and take the overpass to the other side of 101. There is an $8 visitor fee for non- members and members ($40 per year) are admitted free. 
Our phone number is (408) 732-9869 for a recorded message about upcoming meetings. Since the PPC was formed in late 1989 many people have sampled it, found it to be very valuable, and even understand what we're up to. Nonetheless, certain questions persist. Now, as we approach our fifth year of operation, perhaps we can and should clarify some of the issues. For example: Q. What is PPC's raison d'etre? A. The PARALLEL Processing Connection is an entrepreneurial organization intent on facilitating the emergence of new businesses. PPC does not become an active member of any such new entities, ie: is not itself a profit center. Q. The issue of 'why' is perhaps the most perplexing. After all, a $40 annual membership fee is essentially free and how can anything be free in 1993? What's the payoff? For whom? A. That's actually the easiest question of all. Those of us who are active members hope to be a part of new companies that get spun off; the payoff is for all of us -- this is an easy win-win! Since nothing else exists to facilitate hands-on entrepreneurship, we decided to put it together ourselves. Q. How can PPC assist its members? A. PPC is a large technically credible organization. We have close to 100 paid members and a large group of less regular visitors; we mail to approximately 500 engineers and scientists (primarily in Silicon Valley). Major companies need to maintain visibility in the community and connection with it; that makes us an important conduit. PPC's strategy is to trade on that value by collaborating with important companies for the benefit of its members. Thus, as an organization, we have been able to obtain donated hardware, software, and training and we've put together a small development lab for hands-on use of members at our Sunnyvale office. Further, we've been able to negotiate discounts on seminars and hardware/software purchases by members. Most important, alliances such as we described give us an inside opportunity to JOINT VENTURE SITUATIONS. Q. As an attendee, what should I do to enhance my opportunities? A. Participate, participate, participate. Many important industry principals and capital people are in our audience looking for the 'movers'! For further information contact: -- B. Mitchell Loebel parallel@netcom.com Director - Strategic Alliances and Partnering 408 732-9869 PARALLEL Processing Connection Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: parallel@netcom.com (B. Mitchell Loebel) Subject: The PARALLEL Processing Connection - December Meeting Notice Organization: NETCOM On-line Communication Services (408 241-9760 guest) Date: Sat, 4 Dec 1993 22:54:20 GMT Apparently-To: comp-parallel@uunet.uu.net December 13th - Measuring & Predicting Performance With the AIMS Software Toolset from NASA Ames Research Center Designing the "next parallel programming paradigm" (such as, maybe workstation clusters linked with SCI as a Distributed Shared Memory "platform")? Jerry Yan of NASA Ames will describe an invaluable software tool-set being put together which enables parallel program execution to be captured and displayed (and simulated?). The Automated Instrumentation and Monitoring System (spelled AIMS) inserts active event recorders into program source code before compilation; collects performance data at run- time; and affords visualization of program execution based on the data collected. AIMS 2.2 runs on the iPSC/860, iPSC/Delta and Paragon. 
Prototypes for the CM-5 and workstation clusters running PVM have already been developed. Performance tuning will be illustrated using a simple FORTRAN implementation of a NASA Parallel Benchmark called IS (Integer Sort). A discussion of member entrepreneurial projects currently underway will begin at 7:15PM and the main meeting will start promptly at 7:45PM at Sun Microsystems at 901 San Antonio Road in Palo Alto. This is just off the southbound San Antonio exit of 101. Northbound travelers also exit at San Antonio and take the overpass to the other side of 101. Please be prompt; as usual, we expect a large attendance; don't be left out or left standing. There is an $8 fee for non-members and members will be admitted free. -- B. Mitchell Loebel parallel@netcom.com Director - Strategic Alliances and Partnering 408 732-9869 PARALLEL Processing Connection Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: thibaud@kether.cgd.ucar.edu (Francois P. Thibaud) Subject: First CM International Users Group Meeting in Santa Fe Keywords: TMC, CM-2, CM-200, CM-5, user group Sender: news@ncar.ucar.edu (USENET Maintenance) Organization: Climate and Global Dynamics Division/NCAR, Boulder, CO Date: Sun, 5 Dec 1993 02:34:57 GMT Apparently-To: comp-parallel@ncar The following is a forwarded message from "Stephen C. Pope" sent to "cm5-users-list@acl.lanl.gov". Hope to see you in Santa Fe, New Mexico! Francois P. Thibaud Organization: University of Maryland at College Park (UMCP) and The National Center for Atmospheric Research (NCAR) Address: 1850, Table Mesa Drive; PO Box 3000; Boulder CO 80307-3000 USA Phone: (+1)303-497-1707; Fax: (+1)303-497-1700 Internet: thibaud@ncar.ucar.edu (thibaud@ra.cgd.ucar.edu) ------------------------------------------------------------------------ MARK THIS ON YOUR CALENDARS!!! ***************PRELIMINARY ANNOUNCEMENT*************** The first CM International User Group meeting will be held in Santa Fe, NM, from February 16-18, 1994. Los Alamos National Laboratory, in conjunction with TMC, is organizing this first-ever meeting of the newly-formed CM User Group. The agenda is now being planned and will cover talks and working groups on a selected set of topics important to your needs. The meeting will be held at the Eldorado Hotel and has been carefully scheduled so that attendees may tack on a weekend of (hopefully) great skiing at any of the nearby ski areas. The hotel conference rate will apply February 19-20 for anyone wishing to take advantage of this. If you are possibly interested in attending, respond to: cm-users@acl.lanl.gov with your name, email address, street address, phone and FAX numbers, and general applications you use. We will need your preliminary RSVP and address information by DECEMBER 13 so we can send you the registration materials and the meeting agenda by DECEMBER 20. CUTOFF FOR HOUSING WILL BE JANUARY 7, 1994. If you have questions, you may respond to the above address or call Erma Pearson at 505/665-4530.
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: teoym@iscs.nus.sg Subject: IEEE TENCON '94 Special Session Organization: Dept of Info Sys and Comp Sci, National University of Singapore, SINGAPORE Reply-To: teoym@iscs.nus.sg FINAL CALL FOR PAPERS IEEE TENCON '94 22 - 26 August 1994, SINGAPORE Special Session on PARALLEL PROCESSING TECHNOLOGY & APPLICATIONS -------------------------------------------------------------------------- The one-day special session will provide a forum for presentation and exchange of current work on all areas of parallel processing technology and its applications. Topics of interest include, but are not limited to, the following areas: o parallel architectures o parallel languages and algorithms o parallelising compilers and programming environments o performance modelling/evaluation o applications on parallel systems Authors are invited to submit complete, original, and previously unpublished papers reflecting their current research results. All submitted papers will be refereed. Accepted papers will be published in the IEEE TENCON proceedings.
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jwkhong@csd.uwo.ca (James W. Hong) Subject: ICCI'94 CFP - 2nd posting Organization: Dept. of Computer Science, University of Western Ontario Date: Sun, 5 Dec 1993 17:29:35 GMT Message-ID: <1993Dec5.172935.19704@julian.uwo.ca> Sender: news@julian.uwo.ca (USENET News System) Nntp-Posting-Host: mccarthy.csd.uwo.ca CALL FOR PAPERS ********************* ICCI'94 6th INTERNATIONAL CONFERENCE ON COMPUTING AND INFORMATION May 26 - 28 , 1994 Trent University Peterborough, Ontario Canada ***************************************************************************** Keynote Address: "Text Databases" Professor Frank Tompa Director, Centre for the New Oxford English Dictionary and Text Research University of Waterloo, Canada ***************************************************************************** Steering Committee: Waldemar W. Koczkodaj, Laurentian University, Canada (Chair) S.K. Micheal Wong, University of Regina, Canada R.C. Kick, Technological University of Tennessee, USA ***************************************************************************** General Chair: Pradip Srimani, Colorado State University, USA Organizing Committee Chair: Richard Hurley, Trent University, Canada Program Committee Chair: David Krumme, Tufts University, USA Public Relations Chair: James Hong, University of Western Ontario, Canada ****************************************************************************** ICCI'94 will be an international forum for presentation of new results in research, development, and applications in computing and information. The organizers expect both practitioners and theorists to attend. The conference will be organized in 5 streams: Stream A: Data Theory and Logic, Information and Coding Theory Theory of Programming, Algorithms, Theory of Computation Stream B: Distributed Computing and Communication Stream C: Concurrency and Parallelism Stream D: AI Methodologies, Expert Systems, Knowledge Engineering, and Machine Learning Stream E: Software and Data Engineering, CASE Methodology, Database, Information Technology Authors are invited to submit five copies of their manuscript to the appropriate Stream Chair by the submission deadline. Papers should be written in English, and contain a maximum of 5000 words.
Each paper should include a short abstract and a list of keywords indicating subject classification. Please note that a blind review process will be used to evaluate submitted papers. Authors' names and institutions should be identified only on a cover page that can be detached. No information that clearly identifies the authorship of the paper should be included in the body. Authors of accepted papers will be asked to prepare the final version according to the publisher's requirements. It is expected this year's proceedings will again be published by IEEE Computer Society Press or will make the premier issue of a new CD-ROM journal Journal of Computing and Information. Stream Chairs: ************* STREAM A: ======== Si-Qing Zheng Louisiana State University, USA Email: zheng@bit.csc.lsu.edu Fax: (504) 388-1465 Email Contact: Anil Shende Dickinson College, USA Fax: (717) 245-1690 email: shende@dickinson.edu STREAM B: ======== H. Douglas Dykeman, IBM Zurich Research Lab, Switzerland Email: ddy@zurich.ibm.com Fax: 41-1-710-3608 Email Contact: Bart Domzy Trent University, Canada Fax: (705) 748-1625 email: csbcd@blaze.trentu.ca STREAM C: ======== Eric E. Johnson New Mexico State University, USA Email: ejohnson@nmsu.edu Fax: (505) 646-1435 Email Contact: Reda Ammar University of Connecticut, USA Fax: (203) 486-4817 email: reda@cse.uconn.edu STREAM D: ======== Maria E. Orlowska The University of Queensland, Australia Email: maria@cs.uq.oz.au Fax: 61-7-365 1999 Email Contact: Mike Herman Laurentian University, Canada Fax: (705) 673-6591 email: mwherman@nickel.laurentian.ca STREAM E: ======== Shing-Tsaan Huang National Tsing-Hua University, Taiwan Email: sthuang@nthu.edu.tw Fax: 886-35-723694 Email Contact: Ken Barker University of Calgary, Canada Fax: (403) 284-4707 email: barkerk@cpsc.ucalgary.ca Program Committee: ================= Chair: David Krumme, Tufts University, USA J. Abello, Texas A&M U., USA O. Abou-Rabia, Laurentian U., Canada K. Abrahamson, E. Carolina U., USA M. Aoyama, Fujitsu Limited, Japan L.G. Birta, U. Ottawa, CANADA J.P. Black, U. Waterloo, Canada D.L. Carver, Louisiana State U., USA C.-C. Chan, U. Akron, USA S. Chen, U. Illinois, Chicago, USA V. Dahl, Simon Fraser U., Canada S.K. Das, U. North Texas, USA A.K. Datta, U. Nevada, Las Vegas, USA W.A. Doeringer, IBM Res. Lab., Zurich, Switzerland D.-Z. Du, U. Minnesota, USA E. Eberbach, Acadia University, Canada A.A. El-Amawy, Louisiana State U., USA D.W. Embley, Brigham Young U., USA W.W. Everett, AT&T Bell Labs., USA A. Ferreira, CNRS-LIP, France I. Guessarian, Paris 6 U., France J. Harms, U. Alberta, Canada S.Y. Itoga, U. Hawaii, USA J.W. Jury, Trent U., Canada M. Kaiserswerth, IBM Res. Lab., Zurich, Switzerland M. Li, U. Waterloo, Canada M.K. Neville, Northern Arizona U., USA P. Nijkamp, Free U. Amsterdam, The Netherlands K. Psarris, Ohio U., USA P.P. Shenoy, U. Kansas, USA G. Sindre, Norwegian Inst. Technology, Norway R. Slowinski, Technical U. Poznan, Poland M.A. Suchenek, Cal. State U., Dominguez Hills, USA V. Sunderam, Emory U., USA R.W. Swiniarski, San Diego State U., USA A.M. Tjoa, U. Vienna, Austria R. Topor, Griffith U., Australia A.A. Toptsis, York U., Canada C. Tsatsoulis, U. Kansas, USA W.D. Wasson, U. New Brunswick, Canada L. Webster, NASA/Johnson Space Center, USA E.A. Yfantis, U. Nevada, Las Vegas, USA Y. Zhang, U. Queensland, Australia =*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= DEADLINES Jan. 17, 1994 (Mon) Paper submission deadline to the appropriate Stream Chair Mar. 
15, 1994 (Tue) Email Notification of acceptance May 26, 1994 (At the conf.) final version due ========================================================================= For further information, please contact: Richard Hurley Organizing Committee Chairman Computer Studies Program Trent University Peterborough, ON, Canada K9J 7B8 Phone: (705) 748-1542 Fax: (705) 748-1625 Email: icci@flame1.trentu.ca Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: kgatlin@galaxy.csc.calpoly.edu (Kang Su Gatlin) Subject: Good Schools in Parallel Computing Date: Sun, 05 Dec 93 19:00:31 GMT Organization: Computer Science Department, Cal Poly SLO Nntp-Posting-Host: galaxy.csc.calpoly.edu Apparently-To: comp-parallel@ucsd.edu [Check out the Schools listings in Parlib How to Get Information from Parlib: The parlib login on hubcap.clemson.edu is a mail server using the netlib software. To get the instructions, send the following mail message: shell> mail parlib@hubcap.clemson.edu Subject: send index . shell> .... ] I am writing in the hopes that someone here may have opinions on the top grad schools in the US in the field of parallel algorithms. Is there a certain school that your company recruits from or has a good reputation? Or does anybody know how these four schools stack up in this area: UCSB UCSD UCLA Stanford Thanks for the advice, I'm using it to help myself decide where to further my education. Kang Su Gatlin Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.arch From: hus@msuvx1.memst.edu Subject: Connect mouse to parallel? Message-ID: <1993Dec5.215447.12636@msuvx1.memst.edu> Date: 5 Dec 93 21:54:47 -0500 Organization: Memphis State University Can I connect my mouse to the parallel port? How? Thanks. hus Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: djb1@ukc.ac.uk Newsgroups: comp.parallel,comp.sys.transputer,comp.parallel.pvm Subject: Transputer, occam and parallel computing archive: NEW FILES Organization: Computing Lab, University of Kent at Canterbury, UK. Summary: New files. See ADMIN article for other info. Keywords: transputer, occam, parallel, archive, anonymous ftp This is the new files list for the Transputer, occam and parallel computing archive. Please consult the accompanying article for administrative information and the various ways to access the files. [For experts: anonymous ftp from unix.hensa.ac.uk in /parallel] Dave NEW FEATURES ~~~~~~~~~~~~ * The complete contents of the INMOS (US) email archive server Contains the check programs, iservers, software bulletins and other programs. 8 Megabytes of files provided by Hugh Thomas of INMOS US. Many thanks to Hugh for this. * "Networks, Routers and Transputers" book as PostScript. See /parallel/books/IOS/nrat/Overview for details. * INMOS Preliminary Datasheets for the C101 and C104 DS-Link processors See /parallel/documents/inmos/ieee-hic/data/C101.ps.Z (330K) and C104.ps.Z (1.3M) * FULL TEXT INDEX A nightly full text index is now being generated, of all the individual Index files. This is probably the best way to find something by 'grepping' the file although it is very large. 
/parallel/index/FullIndex.ascii 283044 bytes /parallel/index/FullIndex.ascii.Z 90085 bytes (compressed) /parallel/index/FullIndex.ascii.gz 62922 bytes (gzipped) NEW FILES since 12th November 1993 (newest first) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /parallel/documents/inmos/archive-server The complete contents of the INMOS (US) email archive server (8 Megabytes of files) provided by Hugh Thomas of INMOS US. Many thanks to Hugh for this. 25th November 1993 /parallel/software/emacs-lisp/occam-mode.el Improved version of occam2 mode for GNU emacs by Jared Saia with "pretty good automatic indentation of occam code by syntax, auto capitilization of Occam keywords, the ability to comment regions of occam code in a way that the compiler accepts and some other random things." /parallel/software/emacs-lisp/old-occam-mode.el Older version of above. 24th November 1993 [Note to net.police: this is for informational purposes, not advertising] /parallel/documents/vendors/APR/ProductInfo/ Product information from ftp site netcom.com:/pub/forge/ProductInfo /parallel/documents/vendors/APR/ProductInfo/README Contents /parallel/documents/vendors/APR/ProductInfo/dpf.man.txt FORGE Magic Distributed Memory Parallelizer dpf manual page /parallel/documents/vendors/APR/ProductInfo/dpf_datasheet.txt FORGE Magic Distributed Memory Parallelizer dpf Datasheet /parallel/documents/vendors/APR/ProductInfo/forge90_datasheet.txt Interactive FORGE 90 Parallelizer Datasheet /parallel/documents/vendors/APR/ProductInfo/forgex_datasheet.txt FORGE Explorer Interactive Fortran Browser forgex Datasheet /parallel/documents/vendors/APR/ProductInfo/magic_datasheet.txt FORGE Magic Automatic Parallelizers Intro Datasheet /parallel/documents/vendors/APR/ProductInfo/news.txt Latest News from APR (November 1993) /parallel/documents/vendors/APR/ProductInfo/pricing.txt (nearly empty) /parallel/documents/vendors/APR/ProductInfo/xhpf.man.txt FORGE Magic HPF Parallelizer xhpf manual page /parallel/documents/vendors/APR/ProductInfo/xhpf_datasheet.txt FORGE Magic HPF Parallelizer xhpf Datasheet /parallel/conferences/concurrency-in-computational-logic Call for papers for Workshop on Concurrency in Computational Logic being held on 13th Decemeber 1993 at the Department of Computer Science, City University, London, United Kingdom. Deadline for papers: 30th November 1993. /parallel/documents/vendors/formal-systems/fdr.announce Announcement of availaility of academic licences for FDR (Failures Divergence Refinement) tool. The distribution is available to educational institutions for a nominal media charge. It is a refinement checking tool for CSP. /parallel/conferences/hpcn-europe-94-update Updated call for papers and details of the European Conference on High-Performance Computing and Networking (HPCN Europe) 1994 being held from April 18-20 1994 at the Sheraton Hotel & Towers Munich, Germany. Deadlines: Papers: 30th November 1993; Posters: 30th November 1993. /parallel/conferences/alp-plilp94.ps.Z /parallel/conferences/alp-plilp94.tex /parallel/conferences/alp-plilp94.txt Call for papers for ALP'94 and PLILP'94 - the Fourth International Conference on Algebraic and Logic Programming and the Sixth International Symposium on Programming Language Implementation and Logic Programming being held from 14th-16th September 1994 at Madrid, Spain. Deadlines: Papers: 28th February 1994; Acceptance: 10th May 1994; Final copy: 20th June 1994. 
/parallel/user-groups/hpff/hpff-II Call for attendance at the High Performance Fortran Forum II Kickoff Meeting being held from 13th-14th January 1994 at the Wyndham Greenspoint Hotel, Houston, Texas, USA. /parallel/documents/misc/X3H5-standard-comments Call for comments on the proposed ANSI standard: X3.252.199x, Parallel Processing Model for High Level Programming Languages /parallel/conferences/euromicro-workshop-on-par+dist-processing Revised details of registration and programme of the 2nd Euromicro Workshop on Parallel and Distributed processing from January 26-28th 1994 at the University of Malaga, Spain. /parallel/conferences/ieee-workshop-fault-tolerant-par-dist-systems Call for papers for 1994 IEEE Workshop on Fault-Tolerant Parallel and Distributed Systems being held from 13th-14th June 1994 at College Station, Texas, USA sponsored by the IEEE Computer Society Technical Committee on Fault-Tolerant Computing in cooperation with IFIP Working Group 10.4 and Texas A&M University and in conjunction with FTCS-24. Deadlines: Papers: 18th February 1994; Acceptance: 15th April 1994; Revised Paper: 15th August 1994; Panel Proposals: 15th March 1994 23rd November 1993 /parallel/reports/gatech/ossurvey.announcement Announcement of report below and call for comments by Bodhisattwa Mukherjee (bodhi@cc.gatech.edu> /parallel/reports/gatech/ossurvey.ps.Z A Survey of Multiprocessor Operating System Kernels (DRAFT) by Bodhisattwa Mukherjee (bodhi@cc.gatech.edu>, Karsten Schwan and Prabha Gopinath ABSTRACT: Multiprocessors have been accepted as vehicles for improved computing speeds, cost/performance, and enhanced reliability or availability. However, the added performance requirements of user programs and functional capabilities of parallel hardware introduce new challenges to operating system design and implementation. This paper reviews research and commercial developments in multiprocessor operating system kernels from the late 1970's to the early 1990's. The paper first discusses some common operating system structuring techniques and examines the advantages and disadvantages of using each technique. It then identifies some of the major design goals and key issues in multiprocessor operating systems. Issues and solution approaches are illustrated by review of a variety of research or commercial multiprocessor operating system kernels. /parallel/conferences/par-high-perf-apps Call for attendance at the Conference and Tutorial on Parallel High-Performance Applications being held from 15th-17th December 1993 at KTH, Stockholm, Sweden. Registration before 1st December 1993. /parallel/reports/announcements/update-propogation-in-galactica-net Announcement of Technical Report available for FTP: "Update Propagation in the Galactica Net Distributed Shared Memory Architecture" by A. Wilson, R. LaRowe, R. Ionta, R. Valentino, B. Hu, P. Breton and P. Lau of Center For High Performance Computing of Worcester Polytechnic Institute, Marlborough, MA, USA. /parallel/documents/vendors/APR/magic-dm-auto-parallelization-tools Announcement of MAGIC series of automatic parallelizing pre-compilers, FORGE Magic/DM for distributed memory systems and clustered workstations, and FORGE Magic/SM for shared memory parallel systems. /parallel/documents/vendors/APR/parallelizing-products Details of APR parallelizing products (FORGE 90, xHPF77) /parallel/documents/misc/DIMACS-implementation-challenge-III Call for participation in the Third DIMACS Internations Algorithm Implementation Challenge. 
This will take place between November 1993 and September 1994 by carrying out research projects related to the problem areas specified to present research papers at a DIMACS workshop in October 1994. /parallel/documents/misc/faster-messaging-shmem-multiprocs-video Details of Video tape on "Faster Messaging in Shared-Memory Multiprocessors", Optimizing Memory-Based Messaging for Scalable Shared Memory Multiprocessor Architectures, 8th September 1993, 71 minutes by Bob Kutter and David Cheriton. /parallel/user-groups/cray/conference-spring-1994 Call for papers and details of Cray Users Group Spring '94 conference being held from 14th-18th March 1994 at San Diego, California, USA. Deadlines: Papers and Posters: 10th December 1994. 22nd November 1993 /parallel/documents/vendors/intel/cmu-intel-iWarp-ftp-site Details of FTP site for publications relating to the Carnegie Mellon/ Intel SSD built iWarp parallel computers. /parallel/faqs/cm5-intel-mailing-lists Details of NAS/NASA mailing lists for the Thinking Machines CM5, Intel Paragon and Intel iPSC/860 machines. [ Added new area: /parallel/documents/benchmarks and moved other ] [ benchmark-related documents here. ] /parallel/documents/benchmarks/PAR93-SMP Announcement of PAR93 -- a benchmark suite designed to measure cache based RISC SMP system performance using well behaved codes that parallelize automatically. /parallel/software/announcements/distributed-c-development-environment Details of the Distributed C Development Environment developed at Technische Universitaet Muenchen, Germany by a group under Prof. Dr. J. Eickel. Available for networks of UNIX computers. Runs on Sun SPARCstations (SunOS), Hewlett Packard workstations (HP/UX), IBM workstations (AIX), Convex supercomputers (ConvexOS), IBM Workstations (AIX) and homogeneous and heterogeneous networks of the systems as mentioned above. Public domain software. /parallel/courses/parasoft-par-prog-course-linda "Introduction to Parallel Programming with Linda" course given by ParaSoft Corporation on 9th December 1993 at Florida State University, Tallahassee, Florida, USA immediately after the Cluster Workshop '93. 19th November 1993 /parallel/conferences/app-par-geoscience Call for papers for the European Geophysical Society XIX General Assembly symposium session EGS2 on Applications of Parallel Processing in Geoscience being held from 25th-29th April 1994 in Gernoble, France. Deadlines: Abstracts: 1st January 1994; Young Scientist and East European Awards: 15th December 1994. /parallel/conferences/spaa94 Call for papers for the 6th Annual ACM Symposium on Parallel Algorithms and Architectures being held from 27th-29th June 1994 at Cape May, New Jersey, USA sponsored by the ACM Special Interest Groups for Automata and Computability Theory (SIGACT) and Computer Architecture (SIGARCH) and organized in cooperation with the European Association for Theoretical Computer Science (EATCS). Deadlines: Papers: 21st January 1994; Acceptance: 15th March 1994; Final-copy: 8th April 1994. /parallel/books/introductory-parallel-books A summary of responses to a query about introductory books for parallel programming/algorithms. Replies by David Bader and Jason Moore 18th November 1993 OTHER HIGHLIGHTS ~~~~~~~~~~~~~~~~ * occam 3 REFERENCE MANUAL (draft) /parallel/documents/occam/manual3.ps.Z By Geoff Barrett of INMOS - freely distributable but copyrighted by INMOS and is a full 203 page book in the same style of the Prentice Hall occam 2 reference manual. Thanks a lot to Geoff and INMOS for releasing this. 
* TRANSPUTER COMMUNICATIONS (WoTUG JOURNAL) FILES /parallel/journals/Wiley/trcom/example1.tex /parallel/journals/Wiley/trcom/example2.tex /parallel/journals/Wiley/trcom/trcom.bst /parallel/journals/Wiley/trcom/trcom01.sty /parallel/journals/Wiley/trcom/trcom02.sty /parallel/journals/Wiley/trcom/trcom02a.sty /parallel/journals/Wiley/trcom/transputer-communications.cfp /parallel/journals/Wiley/trcom/Index /parallel/journals/Wiley/trcom/epsfig.sty LaTeX (.sty) and BibTeX (.bst) style files and examples of use for the forthcoming Wiley journal - Transputer Communications, organised by the World occam and Transputer User Group (WoTUG). See transputer-communications.cfp for details on how to submit a paper. * FOLDING EDITORS: origami, folding micro emacs /parallel/software/folding-editors/fue-original.tar.Z /parallel/software/folding-editors/fue-ukc.tar.Z /parallel/software/folding-editors/origami.zip /parallel/software/folding-editors/origami.tar.Z Two folding editors - origami and folding micro-emacs traditionally used for occam programming environments due to the indenting rules. Origami is an updated version of the folding editor distribution as improved by Johan Sunter of Twente, Netherlands. fue* are the original and UKC improved versions of folding micro-emacs. * T9000 SYSTEMS WORKSHOP REPORTS /parallel/reports/wotug/T9000-systems-workshop/* The reports from the T9000 Systems Workshop held at the University of Kent at Canterbury in October 1992. It contains ASCII versions of the slides given then with the permission of the speakers from INMOS. Thanks to Peter Thompson and Roger Shepherd for this. Subjects explained include the communications architecture and low-level communications, the processor pipeline and grouper, the memory system and how errors are handled. * THE PETER WELCH PAPERS /parallel/papers/ukc/peter-welch Eleven papers by Professor Peter Welch and others of the Parallel Processing Group at the Computing Laboratory, University of Kent at Canterbury, England related to occam, the Transputer and other things. Peter is Chairman of the World occam and Transputer User Group (WoTUG) * ISERVERS /parallel/software/inmos/iservers Many versions of the iserver- the normal version, one for Windows (WIserver), one for etherneted PCs (PCServer) and one for Meiko hardware. * MIRROR OF PARLIB /parallel/parlib Mirror of the PARLIB archive maintained by Steve Stevenson, the moderator of the USENET group comp.parallel. * UKC REPORTS /pub/misc/ukc.reports The internal reports of the University of Kent at Canterbury Computing Laboratory. Many of these contain parallel computing research. * NETLIB FILES /netlib/p4 /netlib/pvm /netlib/pvm3 /netlib/picl /netlib/paragraph /netlib/maspar As part of the general unix.hensa.ac.uk archive, there is a full mirror of the netlib files for the above packages (and the others too). Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Newsgroups: comp.parallel,comp.sys.transputer,comp.parallel.pvm Subject: Transputer, occam and parallel computing archive: ADMIN Organization: Computing Lab, University of Kent at Canterbury, UK. Summary: See NEW FILES article for details of new files Keywords: transputer, occam, parallel, archive, anonymous ftp This is the administrative information article for the Transputer, occam and parallel computing archive. Please consult the accompanying article for details of the new files and areas. 
In the last three weeks I've added another 7 megabytes of files to the archive at unix.hensa.ac.uk in /parallel. It currently contains over 73 Mbytes of freely distributable software and documents, in the Transputer, occam and parallel computing subject area. STATISTICS ~~~~~~~~~~ >4420 users accessed archive (1010 more than last time) >3250 Mbytes transfered (1590MB more) since the archive was started in early May. Top 10 files accessed, excluding Index files 867 /parallel/README 424 /parallel/pictures/T9000-schematic.ps.Z 390 /parallel/index/ls-lR* 356 /parallel/reports/misc/soft-env-net-report.ps.Z 326 /parallel/documents/inmos/occam/manual3.ps.Z 285 /parallel/books/ios/nrat/Overview 276 /parallel/books/ios/nrat/Chapter10a.ps.Z 258 /parallel/books/ios/nrat/Chapter1.ps.Z 255 /parallel/Changes 250 /parallel/books/ios/nrat/Introduction.ps.Z Unsurprisingly - the Networks, Routers and Transputers books has leapt into the top ten and accounts for a large proportion of the 1.3 gigabytes of files transferred. Heres a breakdown of the chapters by popularity: 334 /parallel/books/ios/nrat/Chapter10a.ps.Z 328 /parallel/books/ios/nrat/Overview 316 /parallel/books/ios/nrat/Chapter1.ps.Z 302 /parallel/books/ios/nrat/Introduction.ps.Z 302 /parallel/books/ios/nrat/Chapter10b.ps.Z 300 /parallel/books/ios/nrat/Chapter10c.ps.Z 286 /parallel/books/ios/nrat/Chapter10d.ps.Z 271 /parallel/books/ios/nrat/Appendices.ps.Z 258 /parallel/books/ios/nrat/Chapter2.ps.Z 246 /parallel/books/ios/nrat/Chapter3.ps.Z 243 /parallel/books/ios/nrat/Chapter11.ps.Z 238 /parallel/books/ios/nrat/Chapter8.ps.Z 235 /parallel/books/ios/nrat/Chapter7.ps.Z 235 /parallel/books/ios/nrat/Chapter6a.ps.Z 234 /parallel/books/ios/nrat/Chapter4.ps.Z 233 /parallel/books/ios/nrat/Chapter5.ps.Z 233 /parallel/books/ios/nrat/Chapter6b.ps.Z 221 /parallel/books/ios/nrat/Chapter9.ps.Z The two most popular chapters are Chapter 1 which is the "Transputers and Routers: Components for Concurrent Machines" and Chapter 10 which is "A Generic Architecture for ATM Systems". WHERE IS IT? ~~~~~~~~~~~ At the HENSA (Higher Education National Software Archive) UNIX archive. The HENSA/UNIX archive is accessible via an interactive browsing facility, called fbr as well as email, DARPA ftp, gopher and NI-FTP (Blue Book) services. For details, see below. HOW DO I FIND WHAT I WANT? ~~~~~~~~~~~~~~~~~~~~~~~~~~ The files are all located in /parallel and each directory contains a short Index file of the contents. If you want to check what has changed in between these postings, look at the /parallel/Changes file which contains the new files added. There is also a full text index available of all the files in /parallel/index/FullIndex.ascii but be warned - it is very large (over 200K). Compressed and gzipped versions are in the same directory. For those UNIX dweebs, there are output files of ls-lR in /parallel/index/ls-lR along with compressed and gzipped versions too. HOW DO I CONTACT IT? ~~~~~~~~~~~~~~~~~~~~ There are several ways to access the files which are described below - log in to the archive to browse files and retrieve them by email; transfer files by DARPA FTP over JIPS or use Blue Book NI-FTP. Logging in: ~~~~~~~~~~~ JANET X.25 network: call uk.ac.hensa.unix (or 000049200900 if you do not have NRS) JIPS: telnet unix.hensa.ac.uk (or 129.12.21.7) Once connected, use the login name 'archive' and your email address to enter. You will then be placed inside the fbr restricted shell. Use the help command for up to date details of what commands are available. 
Transferring files by FTP ~~~~~~~~~~~~~~~~~~~~~~~~ DARPA ftp from JIPS/the internet: site: unix.hensa.ac.uk (or 129.12.21.7) login: anonymous password: Use the 'get' command to transfer a file from the remote machine to the local one. When transferring a binary file it is important to give the command 'binary' before initiating the transfer. For more details of the 'ftp' command, see the manual page by typing 'man ftp'. The NI-FTP (Blue Book) request over JANET path-of-file from uk.ac.hensa.unix Username: guest Password: The program to do an NI-FTP transfer varies from site to site but is usually called hhcp or fcp. Ask your local experts for information. Transferring files by Email ~~~~~~~~~~~~~~~~~~~~~~~~~~ To obtain a specific file email a message to archive@unix.hensa.ac.uk containing the single line send path-of-file or 'help' for more information. Browsing and transferring by gopher ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Starting at the Root Minnesota Gopher, select the following entries: 8. Other Gopher and Information Servers/ 5. Europe/ 37. United Kingdom/ 15. HENSA unix (National software archive, University of Kent), (UK)/ 3. The UNIX HENSA Archive at the University of Kent at Canterbury/ 9. Parallel Archive/ and browse the archive as normal. [The numbers are very likely to change] The short descriptions are abbreviated to fit on an 80 column display but the long ones can always be found under 'General Information.' (the Index files). Updates to the gopher tree follow a little behind the regular updates. We are also working on the software that generates it, so please bear with us if some of the areas are incomplete. COMING SOON ~~~~~~~~~~~ A better formatted bibliography of the IOS press (WoTUG, NATUG et al) books. A HUGE bibliography of occam papers, PhD theses and publications - currently about 2000 entries. A freely distributable occam compiler for workstations. A couple of free occam compilers for transputers. DONATIONS ~~~~~~~~~ Donations are very welcome. We do not allow uploading of files directly but if you have something you want to donate, please contact me. Dave Beckett Computing Laboratory, University of Kent at Canterbury, UK, CT2 7NF Tel: [+44] (0)227 764000 x7684 Fax: [+44] (0)227 762811 Email: djb1@ukc.ac.uk
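[Editor's note: for readers who prefer to script their retrievals, the sketch below automates the anonymous-FTP procedure described in the article above (binary transfer from unix.hensa.ac.uk). It is only an illustration, written with a present-day Python standard library rather than the ftp client of the posting's era; the directory and file name are taken from the FAQ listing elsewhere in this digest, the placeholder e-mail address is not real, and the host may of course no longer exist.]

from ftplib import FTP

HOST = "unix.hensa.ac.uk"        # archive host named in the article above
DIRECTORY = "/parallel/faqs"     # FAQ area of the archive
FILENAME = "PVM"                 # one of the FAQ files listed in this digest

def fetch(host=HOST, directory=DIRECTORY, filename=FILENAME):
    # Anonymous login; by convention the password is your e-mail address.
    ftp = FTP(host)
    ftp.login(user="anonymous", passwd="your-email@example.org")
    ftp.cwd(directory)
    # retrbinary corresponds to giving the 'binary' command before 'get'
    # in the interactive ftp client described above.
    with open(filename, "wb") as out:
        ftp.retrbinary("RETR " + filename, out.write)
    ftp.quit()

if __name__ == "__main__":
    fetch()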
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.lang.functional,comp.parallel From: schreine@risc.uni-linz.ac.at (Wolfgang Schreiner) Subject: Parallel Functional Programming Bibliography (2nd ed) Nntp-Posting-Host: melmac.risc.uni-linz.ac.at Organization: RISC-Linz, Johannes Kepler University, Linz, Austria An updated version of my annotated bibliography on parallel functional programming (including the abstracts of most papers) is available by anonymous ftp from ftp.risc.uni-linz.ac.at (193.170.36.100) in pub/reports/parlab/pfpbib2.ps.Z (BibTeX sources in pfpbib2.tar.Z). The differences to the previous edition are not large but significant; the bibliography now also includes * The nested data-parallel language NESL (thanks to Guy Blelloch, Carnegie Mellon University), * The parallel equational programming language EPL (thanks to Bolek Szymanski, Rensselaer Polytechnic Institute), * New citations and corrections on SISAL (thanks to Rod Oldehoeft, Colorado State University), * New citations on the skeleton approach (thanks to Fethi A. Rabhi, University of Hull, and Tore Bratvolt, Heriot-Watt University). * Citations on work done at the GMD in Bonn on parallel functional programming (thanks to Werner Kluge, GMD). Any comments, corrections, or supplements are welcome.
Wolfgang -------------------------------------------------------- Wolfgang Schreiner Research Institute for Symbolic Computation (RISC-Linz) Johannes Kepler University, A-4040 Linz, Austria Tel: +43 7236 3231 66 Fax: +43 7236 3231 30 Email: schreine@risc.uni-linz.ac.at -------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From: dfk@wildcat.dartmouth.edu (David Kotz) Subject: I/O on parallel machines Message-ID: Followup-To: comp.sys.super,comp.parallel Summary: archives Originator: dfk@wildcat.dartmouth.edu Keywords: parallel I/O, dartmouth archive, example codes Sender: news@dartvax.dartmouth.edu (The News Manager) Organization: Dartmouth College, Hanover, NH Date: Mon, 6 Dec 1993 13:01:43 GMT In article <2dqi2f$3bu@lll-winken.llnl.gov> yates@rafael.llnl.gov (Kim Yates) writes: From: yates@rafael.llnl.gov (Kim Yates) Newsgroups: comp.sys.super Date: 4 Dec 1993 17:40:31 GMT I'm a researcher at Livermore rather loosely associated with the National Storage Laboratory here, and I'm trying to evaluate the current state of the art. I'm interested in all aspects of parallel I/O: hardware, software, performance, programmability, portability, you name it! I have a small collection of parallel I/O examples for ftp at cs.dartmouth.edu in pub/pario. There is also a bibliography there too. I hope the community finds this archive useful, and suggests contributions to the archive. We may want to add an "anecdotes" section as well.
dave -- ----------------- Mathematics and Computer Science Dartmouth College, 6211 Sudikoff Laboratory, Hanover NH 03755-3510 email: David.Kotz@Dartmouth.edu or dfk@cs.dartmouth.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: ukeller@pallas-gmbh.de (Udo Keller) Newsgroups: comp.parallel,comp.parallel.pvm,comp.arch,comp.sys.super Subject: European MPI Workshop - First Reposting Date: 6 Dec 1993 14:02:44 +0100 Organization: PALLAS GmbH Reply-To: mpi-ws@pallas-gmbh.de Keywords: MPI,standards,MPP,SPP X-Newsreader: mxrn 6.18-6 First Announcement, first reposting E U R O P E A N M P I W O R K S H O P MPI, the new standard for message-passing programming, was presented at Supercomputing '93 in Portland recently. The MPI (Message-Passing Interface) standard has been defined by the transatlantic MPI Committee. The European participation in the MPI Committee was funded by the ESPRIT project PPPE. The MPI Committee is now accepting public comments until February 11. European software developers have been using message-passing systems for many years. With their long experience in programming message-passing parallel computers, European software developers should be actively involved in the final round of the MPI definition. It is the aim of the European MPI Workshop to organize the dissemination of MPI in Europe and to collect the European developers' view on MPI. Date: January 17/18 1994 Location: INRIA Sophia Antipolis (near Nice) Organized by: PALLAS, GMD, INRIA (on behalf of the PPPE project) Registration fee: 70 ECU or 450 FF (to be paid cash at registration) Who should attend: European software developers with experience in parallel computing, preferably message passing. Participants from universities, research organizations, and industry are welcome. The maximum number of participants is 80. Agenda: January 17 afternoon: Presentation of the MPI message passing standard January 18 morning: Feedback of European software developers on MPI After the workshop the MPI Committee will have its European meeting. If you want to participate or need more information, please contact PALLAS mpi-ws@pallas-gmbh.de You will receive the MPI standard document. Details on speakers, transport, hotels etc. will be sent out later. -- ---------------------------------------------------------------------------- Udo Keller phone : +49-2232-1896-0 PALLAS GmbH fax : +49-2232-1896-29 Hermuelheimer Str.10 direct line: +49-2232-1896-15 D-50321 Bruehl email : ukeller@pallas-gmbh.de ---------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: assaf@csa.technion.ac.il (Assaf Schuster) Subject: PPL Special issue on Dynamically Reconfigurable Architectures Message-ID: Sender: news@discus.technion.ac.il (News system) Nntp-Posting-Host: csa.cs.technion.ac.il Organization: Technion, Israel Inst. Tech., Haifa Date: Mon, 6 Dec 1993 13:15:46 GMT P A R A L L E L P R O C E S S I N G L E T T E R S Special Issue on Dynamically Reconfigurable Architectures Papers are solicited for a special issue of Parallel Processing Letters covering aspects of dynamically reconfigurable architectures for parallel computation. Such architectures typically allow for the network topology to be configured by allowing processors to select from a set of local communication patterns.
This issue will cover all topics related to the theory, algorithms, structure, and implementation of architectures which support a physical switching of communication patterns during a computation. The special issue is scheduled to be published in Spring 1995. The topics of interest include, but are not limited to: Models Implementations and Systems Complexity Sorting and Packet Routing Scalability Embedding of Fixed Topologies Problem Solving Paradigms Image Processing, Graphics Algorithms Optical Architectures Four copies of complete manuscripts, subject to a hard limit of 12 pages (including figures), should be sent to Dr. Assaf Schuster (see address below) by June 20, 1994. Manuscripts must conform to the normal submission requirements of Parallel Processing Letters. Russ Miller Assaf Schuster Editor PPL Guest Editor PPL Dept. of Computer Science Dept. of Computer Science State University of New York at Buffalo Technion 226 Bell Hall, Buffalo, NY 14260 Haifa 32000 USA ISRAEL miller@cs.buffalo.edu assaf@cs.technion.ac.il -------------------------- CUT HERE ------------------------------ \documentstyle[12pt]{article} \setlength{\textwidth}{6 in} \setlength{\textheight}{9.25 in} \setlength{\oddsidemargin}{0.00 in} \setlength{\topmargin}{-0.5in} \setlength{\parskip}{7.5pt} \baselineskip 15pt \pagestyle{empty} \begin{document} \vspace*{1cm} \begin{center} {\Large {\it Parallel Processing Letters}\\ \vspace{0.3cm} Special Issue on\\ \vspace{0.3cm} Dynamically Reconfigurable Architectures} \end{center} \vspace{1cm} Papers are solicited for a special issue of {\em Parallel Processing Letters} covering aspects of dynamically reconfigurable architectures for parallel computation. Such architectures typically allow for the network topology to be configured by allowing processors to select from a set of local communication patterns. This issue will cover all topics related to the theory, algorithms, structure, and implementation of architectures which support a physical switching of communication patterns during a computation. The special issue is scheduled to be published in Spring 1995. The topics of interest include, but are not limited to: \begin{center} \noindent \begin{tabular}{ll} Models & Implementations and Systems \\ Complexity & Sorting and Packet Routing \\ Scalability & Embedding of Fixed Topologies \\ Problem Solving Paradigms & Image Processing, Graphics \\ Algorithms & Optical Architectures \\ \end{tabular} \end{center} Four copies of complete manuscripts, subject to a hard limit of 12 pages (including figures), should be sent to Dr. Assaf Schuster (see address below) by June 20, 1994. Manuscripts must conform to the normal submission requirements of Parallel Processing Letters. \begin{center} \small \noindent \begin{tabular}{ll} Russ Miller & Assaf Schuster \\ Editor PPL & Guest Editor PPL \\ Dept. of Computer Science & Dept. of Computer Science \\ State University of New York at Buffalo & Technion \\ 226 Bell Hall, Buffalo, NY 14260 & Haifa 32000 \\ USA & ISRAEL \\ miller@cs.buffalo.edu & assaf@cs.technion.ac.il \end{tabular} \normalsize \end{center} \end{document} Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.databases.object,comp.databases,comp.parallel From: anna@deis35.cineca.it (Anna Ciampolini) Subject: ECOOP94 Call for Demo Apparently-To: comp-parallel@uunet.uu.net Sender: fpst@hubcap.clemson.edu (Steve Stevenson) Organization: DEIS ... 
Date: Fri, 3 Dec 93 15:36:07 GMT Call for Demonstration Proposals - ECOOP '94 The Eighth European Conference on Object-Oriented Programming Bologna, Italy July 6-8, 1994 The 1994 Eighth European Conference on Object-Oriented Programming will be held on July 6-8, 1994, in Bologna, Italy. The Conference aims to bring together researchers and practitioners from academia and industry to discuss and exchange new developments in object-oriented languages, systems and methods. A Demonstration session is planned in parallel with the Conference sessions. Demonstrations of object-oriented software are invited to illustrate innovative ideas. Candidate demos should: * illustrate innovative object-oriented concepts; * use advanced technologies; * present non-commercial products (an exhibition session for commercial object-oriented software is also planned). Proposals for demonstrations should be approximately three pages in length, and should contain: * a description of the demo, identifying the specific technical issues that will be addressed; * a discussion of the relevance of the demo for the object-oriented programming community; * the hardware/software requirements for the demonstration. Acceptance of demos will be decided on the basis of their technical merit, scientific relevance and novelty; it will also be constrained by the organizers' capability to furnish the required hardware/software. Proposals must be submitted no later than April 1, 1994 to: Anna Ciampolini ECOOP'94 Demonstration Chair DEIS - Universita' di Bologna Viale Risorgimento 2 I-40136 Bologna, Italy Tel.: +39 51 6443033 Fax : +39 51 6443073 E-mail: anna@deis33.cineca.it Acceptance will be notified no later than May 16, 1994. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: djb1@ukc.ac.uk Subject: Re: FAQ on message passing packages Date: Mon, 06 Dec 93 14:30:45 GMT Organization: Computing Lab, University of Kent at Canterbury, UK. References: <1993Dec3.132938.28817@hubcap.clemson.edu> In article <1993Dec3.132938.28817@hubcap.clemson.edu>, Timothy Burns wrote: >Unfortunately, comp.parallel does not have an FAQ on rtfm.mit.edu. >I am therefore cluttering the net with my question: > > Does anyone know where I can find the tcgmsg package? > >I am also looking for a whole list of packages. > >Tim Burns email: tim@osiris.usi.utah.edu The FAQs area of the parallel computing archive site I run has several lists of tools for different subjects. It's possible I could have what you want.
site: unix.hensa.ac.uk directory: /parallel/faqs total 343 -rw-r--r-- 1 djb1 3797 Nov 19 14:17 Index -rw-r--r-- 1 djb1 24916 Nov 29 15:12 PVM -rw-r--r-- 1 djb1 9355 Nov 2 16:32 PVM-technical -rw-r--r-- 1 djb1 8172 Nov 2 16:10 classification-of-parallel-algorithms -rw-r--r-- 1 djb1 1801 Sep 3 16:51 clusters-of-workstations -rw-r--r-- 1 djb1 3174 Nov 5 17:00 cm5-intel-mailing-lists -rw-r--r-- 1 djb1 25678 Sep 14 12:10 dynamic-load-balancing-farming -rw-r--r-- 1 djb1 5738 Oct 25 11:41 industrial-par-tools -rw-r--r-- 1 djb1 4987 Sep 15 14:04 ksr-ipsc860-papers -rw-r--r-- 1 djb1 2887 Nov 8 15:28 linux-and-transputers -rw-r--r-- 1 djb1 4284 Sep 3 16:48 load-balancing-simd -rw-r--r-- 1 djb1 7339 Sep 28 11:09 mesh-of-buses -rw-r--r-- 1 djb1 3657 Oct 25 16:21 message-passing-simulators -rw-r--r-- 1 djb1 4209 Sep 14 10:19 parallel-C++-classes-1 -rw-r--r-- 1 djb1 2637 Sep 14 10:18 parallel-C++-classes-2 -rw-r--r-- 1 djb1 1779 Aug 24 10:19 parallel-C++-extensions -rw-r--r-- 1 djb1 1937 Oct 25 16:41 parallel-Fourier-transforms -rw-r--r-- 1 djb1 8613 Oct 1 10:24 parallel-data-compression -rw-r--r-- 1 djb1 22605 Sep 28 11:09 parallel-garbage-collection -rw-r--r-- 1 djb1 44286 Sep 14 11:34 parallel-genetic-algorithms -rw-r--r-- 1 djb1 14472 Sep 17 14:21 scalability -rw-r--r-- 1 djb1 5715 Aug 25 10:00 systems-for-par-prog-development -rw-r--r-- 1 djb1 15559 Sep 3 10:05 tools-for-clustered-workstations -rw-r--r-- 1 djb1 34195 Oct 5 16:32 transputer-FAQ -rw-r--r-- 1 djb1 48251 Oct 5 16:32 transputer-compilers -rw-r--r-- 1 djb1 25828 Oct 5 16:32 transputer-ftp-sites ... and indeed, grepping in those files I found the following in 'tools-for-clustered-workstations': Name: TCGMSG (Theoretical Chemistry Group Message Passing System) From: Argonne National Laboratory Works w/on: Heterogeneous computers Languages: C and Fortran Avail at: ftp.tcg.anl.gov:pub/tcgmsg Contact: rj_harrison@pnl.gov (Robert J. Harrison) Desc.: Like PVM and p4. Comes with set of example 'chemistry' applications. Predecessor to p4. Dave Beckett Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Newsgroups: comp.parallel,csd.umiacs.parallel Subject: Kendall Square Research In The News Date: 6 Dec 1993 16:22:08 GMT Organization: Professional Student, University of Maryland, College Park Nntp-Posting-Host: tea.eng.umd.edu I saw this in this morning's Washington Post (12/6/93) business section: DEMOTIONS AT KENDALL SQUARE Kendall Square Research Corp., a Waltham, Mass., supercomputer company, demoted three top officials after revelations that its income had been misreported. The company said that revenue for the first nine months of 1993 will be $10.6 million, less than half the $24.7 million it had reported. The company also revised its 1992 revenue to $16.3 million, compared with the $20.5 million previously reported, and its 1992 net loss to $17.2 million compared with $12.7 million previously reported. Founder Henry Burkhardt III was stripped of his chief executive office title but will remain president. The executive vice president and chief financial officer "will cease to be officers of the company." But they will remain employees at full pay. Chairman William I. Koch, chairman and the company's largest shareholder, assumed the title of CEO until a new one is found. ----------------------------------- David A. Bader Electrical Engineering Department A.V. 
Williams Building - Room 3142-A University of Maryland College Park, MD 20742 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: richards@wrl.EPI.COM (Fred Richards) Subject: Data-parallel languages (non-CM C*)? Date: 6 Dec 1993 16:38:47 GMT Organization: Entropic Research Laboratory, Washington DC Reply-To: richards@wrl.EPI.COM Is any data-parallel language emerging as a standard, much as PVM seems to be as a message-passing library? Does C*, or something *very* similar, run on any of the other MPP machines (Intel, nCube, MasPar, etc.)? Fred Richards Entropic Research Lab 600 Pennsylvania Ave. SE, Suite 202 Washington, DC 20003 (202) 547-1420 richards@wrl.epi.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 6 Dec 1993 18:10:24 -0500 From: takis@poobah.wellesley.edu (Takis Metaxas) Subject: 6th IEEE International Conference on Tools with Artificial Intelligence CALL FOR PAPERS 6th IEEE International Conference on Tools with Artificial Intelligence November 6-9, 1994 Hotel Intercontinental New Orleans, Louisiana This conference is envisioned to foster the transfer of ideas relating to artificial intelligence among academics, industry, and government agencies. It focuses on methodologies which can aid the development of AI, as well as the demanding issues involved in turning these methodologies into practical tools. Thus, this conference encompasses the technical aspects of specifying, developing, and evaluating theoretical and applied mechanisms which can serve as tools for developing intelligent systems and pursuing artificial intelligence applications. Focal topics of interest include, but are not limited to, the following: * Machine Learning, Computational Learning * Artificial Neural Networks * Uncertainty Management, Fuzzy Logic * Distributed and Cooperative AI, Information Agents * Knowledge Based Systems, Intelligent Data Bases * Intelligent Strategies for Scheduling and Planning * AI Algorithms, Genetic Algorithms * Expert Systems * Natural Language Processing * AI Applications (Vision, Robotics, Signal Processing, etc.) * Information Modeling, Reasoning Techniques * AI Languages, Software Engineering, Object-Oriented Systems * Logic and Constraint Programming * Strategies for AI development * AI tools for Biotechnology INFORMATION FOR AUTHORS There will be both academic and industry tracks. A one-day workshop (November 6th) precedes the conference (November 7-9). Authors are requested to submit original papers to the program chair by April 20, 1994. Five copies (in English) of double-spaced typed manuscript (maximum of 25 pages) with an abstract are required. Please attach a cover letter indicating the conference track (academic/industry) and areas (in order of preference) most relevant to the paper. Include the contact author's postal address, e-mail address, and telephone number. Submissions in other audio-visual forms are acceptable only for the industry track, but they must focus on methodology and timely results on AI technological applications and problems. Authors will be notified of acceptance by July 15, 1994 and will be given instructions for camera-ready papers at that time. The deadline for camera-ready papers will be August 19, 1994. Outstanding papers will be eligible for publication in the International Journal on Artificial Intelligence Tools.
Submit papers and panel proposals by April 20, 1994 to the Program Chair: Cris Koutsougeras Computer Science Department Tulane University New Orleans, LA 70118 Phone: (504) 865-5840 e-mail: ck@cs.tulane.edu Potential panel organizers please submit a subject statement and a list of panelists. Acceptances of panel proposals will be announced by June 30, 1994. A computer account (tai@cs.tulane.edu) is running to provide automatic information responses. You can obtain the electronic files for the CFP, program, registration form, hotel reservation form, and general conference information. For more information please contact: Conference Chair Steering Committee Chair Jeffrey J.P. Tsai Nikolaos G. Bourbakis Dept. of EECS (M/C 154) Dept. of Electrical Engineering 851 S. Morgan Street SUNY at Binghamton University of Illinois Binghamton, NY 13902 Chicago, IL 60607-7053 Tel: (607)777-2165 (312)996-9324 e-mail: bourbaki@bingvaxu.cc.binghamton.edu (312)413-0024 (fax) tsai@bert.eecs.uic.edu Program Chair : Cris Koutsougeras, Tulane University Registration Chair : Takis Metaxas, (617) 283-3054, e-mail: takis@poobah.wellesley.edu Local Arrangements Chair : Akhtar Jameel, e-mail: jameel@cs.tulane.edu Workshop Organizing Chair : Mark Hovy Industrial Track Vice Chairs : Steven Szygenda, Raymond Paul Program Vice Chairs : Machine Learning: E. Kounalis Computational Learning: J. Vitter Uncertainty Management, Fuzzy Logic: R. Goldman Knowledge Based Systems, Intelligent Data Bases: M. Ozsoyoglu AI Algorithms, Genetic Algorithms: P. Marquis Natural Language Processing: B. Manaris Information Modeling, Reasoning Techniques: D. Zhang Logic and Constraint Programming: A. Bansal AI Languages, Software Engineering, Object-Oriented Systems: B. Bryant Artificial Neural Networks: P. Israel Distributed and Cooperative AI, Information Agents: C. Tsatsoulis Intelligent Strategies for Scheduling and Planning: L. Hoebel Expert Systems: F. Bastani AI Applications (Vision, Robotics, Signal Processing, etc.): C. T. Chen AI tools for Biotechnology: M. Perlin Strategies for AI development: U. Yalcinalp Publicity Chairs : R. Brause, Germany Mikio Aoyama, Japan Benjamin Jang, Taiwan Steering Committee : Chair: Nikolaos G. Bourbakis, SUNY-Binghamton John Mylopoulos, University of Toronto, Ontario, Canada C. V. Ramamoorthy, University of California-Berkeley Jeffrey J.P. Tsai, University of Illinois at Chicago Wei-Tek Tsai, University of Minnesota Benjamin W. Wah, University of Illinois at Urbana Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Graham Jones Subject: paper wanted Sender: UseNet News Admin Organization: Department of Computer Science, University of Edinburgh Date: Tue, 7 Dec 1993 10:19:13 GMT Apparently-To: comp-parallel@uknet.ac.uk I am trying to get hold of a Hewlett Packard technical report HPL-91-11. Does anyone know of either 1) an ftp site for HP 2) someone I could email to obtain the report Thank you -- Graham P. Jones Address: Room 3419, The University Of Edinburgh, Edinburgh EH9 3JY Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rl@uni-paderborn.de (Reinhard Lueling) Subject: special purpose parallel architectures Date: 7 Dec 1993 10:30:18 +0100 Organization: Universitaet Paderborn, Germany We are actually looking for special purpose parallel processing systems which have been realized or which are actually planned for realization. 
I know that there are some special purpose parallel processing systems for graphics applications and for VLSI simulation (I think there is an IBM simulation engine available). Do you have more information? Do you think it's worthwhile to build special purpose hardware? Which applications might require specialized hardware? There may be different ways to build a special purpose system (different levels of speciality): 1. take available processors (sparcs, intel, transputers ....) and connect them by specialized routing chips and additional hardware in a dedicated way to fulfill the special requirements of the application. 2. design specialized processors and routing chips and connect them. Any pointers to literature would help. I will post a summary afterwards. thanks, Reinhard Reinhard Lueling University of Paderborn Dept. of Computer Science Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Winkelmann@rrz.Uni-Koeln.DE (Volker Winkelmann) Subject: Conference "Trends in Workstation Cluster Computing", March 2-4, 1994 Date: 6 Dec 1993 15:37:46 GMT Organization: Regional Computing Center, University of Cologne Nntp-Posting-Host: rs1.rrz.uni-koeln.de Keywords: DCE/DME, Distributed Systems, Distributed Management **** Second announcement / call for participation for the conference **** ------------------------------------------- | Trends in Workstation Cluster Computing | | | | | | University of Cologne (Germany) | | March 2-4, 1994 | ------------------------------------------- Goals of the Conference ----------------------- During the last few years workstation clusters have gained considerable importance in industrial and academic computing because of their attractive price/performance ratio. Workstation clusters steadily replace mainframes and even supercomputers. System administrators face a growing number of hardware and software components, and the number of users of workstation clusters increases. The conference "Trends in Workstation Cluster Computing" intends to present state-of-the-art solutions to typical problems inherent in workstation cluster management in the areas of operating systems, tools, services and applications. Intended audience ----------------- o computer scientists who are looking for better solutions to these problems, o system analysts and system managers, who need to get their distributed systems running efficiently, and o users who need to find long-living and efficient solutions to their problems. Central topics -------------- o scheduling and load balancing in distributed systems, o services in distributed systems, o availability and security of services, o parallel computing in distributed systems. Tutorial -------- On March 2, a one-day tutorial on the Distributed Computing Environment (DCE) and the Distributed Management Environment (DME) will set the stage for the subsequent presentations. Conference ---------- Talks on AMOEBA, ATHENA, BIRLIX, CHORUS, NIS+, NQS/EXEC, OSF/DME, PEACE, PLAN 9, SHIFT, TIVOLI and others are planned. On March 3, a series of invited talks and discussions will give an overview of the state of the art in distributed operating systems and distributed workstation management software. On March 4, experts from industry and academia will report on their experience in workstation cluster computing. Conference languages will be German and English.
Panel discussion ---------------- On March 4, a panel discussion of workstation vendors on "Future Trends in Workstation Cluster Computing" will conclude the conference. Agenda / Technical Program -------------------------- **** Wednesday, March 2, 1994 **** 9:15 Welcome Dr. W. Trier, Regionales Rechenzentrum, Universitaet zu Koeln Geschaeftsfuehrender Direktor TUTORIAL ON DCE/DME M. Santifaller, santix software, Muenchen H. Streppel, Digital Equipment, Muenchen 9:30 Einleitung H. Streppel, Digital Equipment, Muenchen 10:00 Einfuehrung in Distributed Computing anhand von DCE M. Santifaller, santix software, Muenchen 11:00 Break 11:30 Network Computing Modelle in der Praxis M. Santifaller, santix software, Muenchen 12:15 Distributed System Management, Anforderungen und Technologien I M. Santifaller, santix software, Muenchen 13:00 Break for Lunch 14:30 Distributed System Management, Anforderungen und Technologien II H. Streppel, Digital Equipment, Muenchen 15:30 Ueberblick ueber DME, Architektur, Status, Ausblick H. Streppel, Digital Equipment, Muenchen 16:00 Break 16:30 Sicherheit in DCE/DME M. Santifaller, santix software, Muenchen 17:30 Discussion 18:00 End of Tutorial **** Thursday, March 3, 1994 **** 9:00 Welcome Prof. Dr. P. Speth, Prorektor der Universitaet zu Koeln, Institut fuer Geophysik und Meteorologie Prof. Dr. G. Hohlneicher, Nebenamtlicher Direktor des RRZK, Institut fuer Physikalische Chemie II SESSION - "DISTRIBUTED OPERATING SYSTEMS" Chair: Prof. Dr. U. Trottenberg, GMD, Sankt Augustin 9:30 Plan 9 from Bell Labs R. Pike, Bell Laboratories, Murray Hill, NJ 10:30 Break 11:00 BIRLIX: Eine Sicherheitsarchitektur fuer verteilte Systeme Dr. W. Kuehnhauser, GMD, St. Augustin 11:30 Chorus/MiX: The Single System Image Approach Dr. M. Rozier, Chorus systemes, St. Quentin e.Y. 12:00 PEACE: Der Weg zur Hoechstleistung F. Schoen, GMD-FIRST, Berlin 12:30 Break for Lunch 14:00 The Distributed Operating System AMOEBA (Current Status) L. P. van Doorn, Vrije Universiteit Amsterdam 15:00 Break SESSION - "DISTRIBUTED WORKSTATION MANAGEMENT" Chair: L. B. Karle, Digital Equipment, Muenchen 15:30 CONNECT: Queue (NQS) J. Milles, Sterling Software, Dallas, Texas 16:00 NIS+: Distributed Naming Service S. Holzaht, Sun Microsystems, Ratingen 16:30 Break 17:00 Tivoli Management Environment M. Santifaller, santix software, Muenchen 17:30 DME: Zu anspruchsvoll fuer den Markt? H. Streppel, Digital Equipment, Muenchen 18:00 End of Sessions 19:00 Conference Dinner **** Friday, March 4, 1994 **** SESSION - "EXPERIENCE AND APPLICATIONS I" Chair: Prof. Dr. G. Hohlneicher, Inst. f. Phys. Chemie II, Koeln 9:00 Athena Experience - Mainframe to Fully Distributed Dr. M. J. Clark, Essex University, Colchester 10:00 Workstation-Gruppen in der KFA O. Buechner, Forschungszentrum Juelich 10:30 Break 11:00 Heterogene Workstations ohne Chaos Dr. H. Richter, LRZ, Muenchen 11:30 SHIFT: The Evolution of Workstation Cluster Computing at CERN L. Robertson, CERN, Genf 12:00 IBM Load Leveler: Erfahrungen auf einem heterog. WS Cluster in der Comp. Chemistry Dr. A. Zywietz, Bayer AG, Leverkusen 12:30 Break for Lunch SESSION - "EXPERIENCE AND APPLICATIONS II" Chair: Dr. H.-M. Wacker, DLR, Koeln 14:00 Trends in Workstation Cluster Computing at Fermilab Dr. F. J. Rinaldo, Fermi Laboratory, Batavia, Il 14:30 Numerische Anwendungen auf einem Workstation Cluster der GMD H. Schwichtenberg, GMD, St. Augustin 15:00 Erfahrungen mit CODINE auf einem Workstation Cluster der VW AG Dr. W. 
Koenig, VW AG, Wolfsburg 15:30 Break 16:00 System-Management-Dienste fuer die Universitaet? C. Kalle, RRZK, Universitaet zu Koeln 16:30 Panel Discussion on Trends in Workstation Cluster Computing Digital Equipment GmbH Hewlett-Packard GmbH IBM Deutschland Informationssysteme GmbH SiliconGraphics Computer Systems GmbH Siemens Nixdorf Informationssysteme AG Sun Microsystems GmbH 18:00 End of Conference Exhibitions and Demonstrations ------------------------------ During all three days hardware and software vendors will present their solutions concerning workstation cluster computing. During the conference terminals for Internet access will be available to the participants. Conference Venue ---------------- University of Cologne Lecture Hall Albertus-Magnus-Platz D-50931 Cologne Federal Republic of Germany Registration Fee ---------------- Tutorial 250 DM Conference 150 DM Tutorial and Conference 350 DM Members of universities and other educational institutions: Tutorial 200 DM, students 100 DM Conference 100 DM, students 50 DM Tutorial and Conference 250 DM, students 100 DM Hotel Reservation ----------------- Hotel reservations are available at the Tourist Office of Cologne Unter Fettenhennen 19 D-50667 Cologne Federal Republic of Germany Tel: +49 221 221-3330/-3338/-3348 Fax: +49 221 221 3320 We have also have made a note of a limited number of rooms for the participants of this conference at the Hotel Conti Bruesseler Str. 40-42 Tel: +49 221 252062 Fax: +49 221 252107 for a reduced price of DM 90 for the period March 1 - 4, 1994. If you are interested, please contact the hotel by February 10, 1994, with reference to the conference. Conference Registration and Questions ------------------------------------- For registration or questions concerning the conference please contact: University of Cologne Regional Computing Center Volker Winkelmann Robert-Koch-Str. 10 D-50931 Cologne Federal Republic of Germany Tel: +49 221 478 5526 Fax: +49 221 478 5590 E-Mail: Trends94@rrz.Uni-Koeln.DE Final Registration: February 15, 1994 Registration Form ----------------- Please fill in the following registration form and send it to the above address: 8<---------------------- snip ----------------- snap ------------------------ _________________________ _________________________ _________________________ Title, Name Institution/Organization Department _________________________ _________________________ _________________________ Street / P.O. Box Zip Code, City Country _________________________ _________________________ _________________________ Telephone Fax E-Mail Registration for "Trends in Computational Cluster Computing" I will attend O the tutorial O the conference I am O a member of an educational institution O a student (please include certification) __________________________________________ Signature Date 8<---------------------- snip ----------------- snap ------------------------ **** End of announcement for "Trends in Workstation Cluster Computing" **** -- ------------------------------------------------------------------------------- Volker Winkelmann Universitaet zu Koeln University of Cologne Regionales Rechenzentrum Regional Computing Center Wi@rrz.Uni-Koeln.DE Robert-Koch-Str. 10 Robert-Koch-Str. 10 Tel: +49-221-478-5526 D-50931 Koeln-Lindenthal D-50931 Cologne Fax: +49-221-478-5590 Bundesrep. Deutschland Federal Rep. 
of Germany ------------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edsr!jlb@uunet.UU.NET (Jeff Buchmiller) Subject: PCN templates for composition Keywords: PCN, divide-and-conquer, self-scheduling, domain decomposition Date: 7 Dec 93 16:46:20 GMT Reply-To: edsr!jlb@uunet.UU.NET Organization: Electronic Data Systems Are there any ftp sites (or helpful posters or E-mail senders) from which some standard PCN templates are available for composition techniques such as divide-and-conquer, self-scheduling, and domain decomposition? Thanks! -jlb- -- Jeff Buchmiller Electronic Data Systems R&D Dallas, TX jlb@edsr.eds.com ----------------------------------------------------------------------------- Disclaimer: This E-mail/article is not an official business record of EDS. Any opinions expressed do not necessarily represent those of EDS. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.arch From: andreas@didymus.rmi.de (Andreas Fassl) Subject: Re: Connect mouse to parallel? Organization: Klaus Kaempf Softwareentwicklung References: <1993Dec5.215447.12636@msuvx1.memst.edu> In <1993Dec5.215447.12636@msuvx1.memst.edu> hus@msuvx1.memst.edu writes: >Can I connect my mouse to the parallel port? How? >Thanks. Hi, the only solution I can mention is to get a serial-to-parallel-converter. Next trapdoor is the driver, I don't know any parallel mouse driver. If you are out of serial lines, the better solution is to buy an inexpensive additional serial board. regards Andreas -- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + proGIS Softwareentwicklung, Simulationssysteme, Beratung + + Germany - 52064 Aachen, Jakobstrasse 181 + + E-Mail: andreas@didymus.rmi.de VOICE: (49) 241 403 446 + Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: anderson@cs.unc.edu (James H. Anderson) Subject: CFP(2nd): PODC94 Date: 7 Dec 1993 13:36:50 -0500 Organization: The University of North Carolina at Chapel Hill IMPORTANT: To reduce expenses, we have decided to not distribute the PODC Call for Papers and Conference Announcement via surface mail this year. So, please think twice before discarding this announcement. ----------**********----------**********----------**********---------- CALL FOR PAPERS 1994 ACM Symposium on Principles of Distributed Computing (PODC) The Thirteenth ACM Symposium on Principles of Distributed Computing (PODC), sponsored by ACM SIGACT and SIGOPS, will be held in Los Angeles, California, USA, August 14-17, 1994. Original research contributions are sought that address fundamental issues in the theory and practice of distributed and concurrent systems. Specially sought are papers that illuminate connections between practice and theory. Topics of interest include, but are not limited to: Distributed algorithms and complexity, Network protocols and architectures, Multiprocessor algorithms and architectures, Distributed operating systems -- principles and practice, Concurrency control and synchronization, Issues of asynchrony, synchrony, and real time, Fault tolerance, Cryptography and security, Specification, semantics, and verification. NEW CONFERENCE FORMAT: This year's conference will have two tracks of presentations. 
Long presentations will follow the standard format of recent years (25 minute talks), and will be accompanied by 10 page extended abstracts in the proceedings. It is understood that the research reported in these abstracts is original, and is submitted exclusively to this conference. In addition, brief presentations (10 minute talks) are invited as well. These presentations will be accompanied by a short (up to 1 page) abstract in the proceedings. Presentations in this track are understood to reflect early research stages, unpolished recent results, or informal expositions, and are not expected to preclude future publication of an expanded or more polished version elsewhere. (The popular ``rump'' session will still take place this year as well, although it is expected to be shorter given the new track.) SUBMISSIONS: Please send 12 copies of a detailed abstract (printed double-sided if possible) or a short abstract (1 page) with the postal address, e-mail address, and telephone number of the contact author, to the program chair: David Peleg IBM T.J. Watson Research Center P.O. Box 704 Yorktown Heights, New York 10598 E-mail: peleg@watson.ibm.com To be considered by the committee, abstracts must be received by February 4, 1994 (or postmarked January 28 and sent via airmail). This is a firm deadline. Acceptance notification will be sent by April 15, 1994. Camera-ready versions of accepted papers and short abstracts will be due May 10, 1994. ABSTRACT FORMAT: An extended abstract (for long presentation) must provide sufficient detail to allow the program committee to assess the merits of the paper. It should include appropriate references and comparisons to related work. It is recommended that each submission begin with a succinct statement of the problem, a summary of the main results, and a brief explanation of their significance and relevance to the conference, all suitable for a non-specialist. Technical development of the work, directed to the specialist, should follow. Submitted abstracts should be no longer than 4,500 words (roughly 10 pages). If the authors believe that more details are essential to substantiate the main claims of the paper, they may include a clearly marked appendix that will be read at the discretion of the program committee. A short abstract (for brief presentation) should provide a much more concise description (up to 1 page) of the results and their implications. Authors should indicate in the cover letter for which track they wish to have their submission considered. In general, the selection criteria for long presentations are expected to be much more stringent than those for short ones. At the authors' request, a (10-page) submission may be considered for both tracks, with the understanding that it will be selected for at most one. (Such a request will in no way affect the chances of acceptance.) PROGRAM COMMITTEE: James Anderson (University of North Carolina), Brian Bershad (University of Washington), Israel Cidon (Technion and IBM T.J. Watson), Michael J. Fischer (Yale University) Shay Kutten (IBM T.J. Watson), Yishay Mansour (Tel-Aviv University), Keith Marzullo (University of California at San Diego), David Peleg (Weizmann Institute, IBM T.J. Watson and Columbia University), Mark Tuttle (DEC CRL), Orli Waarts (IBM Almaden), Jennifer Welch (Texas A&M University) CONFERENCE CHAIR: James Anderson, University of North Carolina. LOCAL ARRANGEMENTS CHAIR: Elizabeth Borowsky, UCLA. 
----------**********----------**********----------**********---------- Jim Anderson anderson@cs.unc.edu PODC94 General Chair Computer Science Dept 919 962-1757 (voice) University of North Carolina 919 962-1799 (fax) Chapel Hill, NC 27599-3175 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: halstead@crl.dec.com (Bert Halstead) Subject: Re: Data-parallel languages (non-CM C*)? Organization: DEC Cambridge Research Lab References: <1993Dec7.155305.4663@hubcap.clemson.edu> In article <1993Dec7.155305.4663@hubcap.clemson.edu>, richards@wrl.EPI.COM (Fred Richards) writes: > Is any data-parallel language emerging as a standard, > much as PVM seems to be as a message-passing library? Your later mention of C* suggests that you might not be interested in this answer, but the closest thing to a "data-parallel language standard" is probably High Performance Fortran. (The specification can be obtained via anonymous FTP from titan.cs.rice.edu in the directory public/HPFF/draft.) Some early implementations are already available, and others will be coming soon. -Bert Halstead Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: berryman-harry@CS.YALE.EDU (Harry Berryman) Subject: Re: Data-parallel languages (non-CM C*)? Date: 7 Dec 1993 16:49:39 -0500 Organization: /homes/na/berryman/.organization References: <1993Dec7.155305.4663@hubcap.clemson.edu> In-Reply-To: richards@wrl.EPI.COM's message of 6 Dec 1993 16:38:47 GMT In article <1993Dec7.155305.4663@hubcap.clemson.edu> richards@wrl.EPI.COM (Fred Richards) writes: Is any data-parallel language emerging as a standard, much as PVM seems to be as a message-passing library? Does C*, or something *very* similar, run on any of the other MPP machines (Intel, nCube, MasPar, etc.) There is a standardization for message passing going on, and it's nearly complete. It's called MPI (Message Passing Interface). PVM has some popularity, but is hardly a standard. scott berryman yale university computer science department Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jmhill@duck.WPI.EDU (Jonathan M Hill) Subject: Performance Analysis Organization: Worcester Polytechnic Institute Keywords: Performance Analysis, Benchmarking, optimization Hello all; I have a neat program that I have been working hard to develop. I am fast approaching that phase that happens in nearly all development cycles, that is characterizing software performance, and optimization for maximum performance. This is rather nice, if I can find/develop a set of analysis tools I will be able to not only make a confident statement regarding performance, I will be able to use the tools to hunt down nasty bugs that slow things down! My application is targeted at the MasPar systems. Don't get the wrong impression. MasPar's MPPE is wonderful for all kinds of things, including performance analysis. I'm sorry to say that my bookshelf is sadly lacking and I could use a few good references, please. 
If anyone can make reference to any good text books/articles that present technigues/methods/concepts that are useful for performing just this sort of thing, I'm sure that in addition to myself, others would be interested as well Thanks in advance; Jonathan jmhill@ee.wpi.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mpourzan@cri.ens-lyon.fr (Makan Pourzandi) Subject: Exponential of the matrix Organization: Ecole Normale Superieure de Lyon Reply-To: mpourzan@cri.ens-lyon.fr Hi, I need to implement the exponential of a matrix for an algorithm which concerns the diffusion equations. I know a little about the parallel implementations of the exponential of the matrix. If there is something already out there, I would be very appreciative on some information. Does anyone know of the articles or technical reports ... concerning this subject ? Please e-mail me to mpourzan@lip.ens-lyon.fr Thank you Makan Pourzandi, Lab. LIP-IMAG ENS Lyon, 46 Allee d'Italie, 69364 Lyon Cedex 07 FRANCE Tel. (+33) 72 72 85 03 Fax (+33) 72 72 80 80 e-mail : mpourzan@lip.ens-lyon.fr Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jeremy@eik.ii.uib.no (Jeremy Cook) Subject: Paragon Study Guide available Organization: University of Bergen, Parallel Processing Laboratory Reply-To: Jeremy.Cook@ii.uib.no A self-teaching guide for the Intel Paragon is available from para//ab. >From the introduction: The objectives of this booklet are: to outline the topology of the Paragon supercomputer, to set out the salient features of computing with independent processors, each having its own data, to explain the details of the library of routines which provide support for parallel computing, to demonstrate some of the aspects of parallel computing by means of simple examples, to provide sufficient detail of the operating system to allow readers to create and implement their own program examples. The study guide is a compressed postscript file and can be retrieved from the parallab ftp server by anonymous ftp. host: ftp.ii.uib.no dir: pub/tech_reports file: pl-jc-93.3.Z Other studyguides for the MasPar/DECmpp and Intel iPSC computers are also available. -- --- Jeremy Cook, Senior Scientist \\Parallel processing lab/National MPP Centre \\ ,-. ,- ,- ,- / / ,- |-. //Dept. of Informatics, University of Bergen, // |-' `-` | `-` / / `-` `-' \\High Technology Centre, N-5020 Bergen,Norway\\ Jeremy.Cook@ii.uib.no //phone: +47 55 54 41 74 fax: +47 55 54 41 99// Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jabf@festival.ed.ac.uk (J Blair-Fish) Subject: General Purpose Parallel Computing Meeting Message-ID: Organization: Edinburgh University The British Computer Society Parallel Processing Specialist Group (BCS PPSG) General Purpose Parallel Computing A One Day Open Meeting with Invited and Contributed Papers 22 December 1993, University of Westminster, London, UK Invited speakers : Les Valiant, Harvard University Bill McColl, PRG, University of Oxford, UK David May, Inmos, UK A key factor for the growth of parallel computing is the availability of port- able software. To be portable, software must be written to a model of machine performance with universal applicability. Software providers must be able to provide programs whose performance will scale with machine and application size according to agreed principles. 
This environment presupposes a model of paral- lel performance, and one which will perform well for irregular as well as regu- lar patterns of interaction. Adoption of a common model by machine architects, algorithm & language designers and programmers is a precondition for general purpose parallel computing. Valiant's Bulk Synchronous Parallel (BSP) model provides a bridge between appli- cation, language design and architecture for parallel computers. BSP is of the same nature for parallel computing as the Von Neumann model is for sequential computing. It forms the focus of a project for scalable performance parallel architectures supporting architecture independent software. The model and its implications for hardware and software design will be described in invited and contributed talks. The PPSG, founded in 1986, exists to foster development of parallel architec- tures, languages and applications & to disseminate information on parallel pro- cessing. Membership is completely open; you do not have to be a member of the British Computer Society. For further information about the group contact ei- ther of the following : Chair : Mr. A. Gupta Membership Secretary: Dr. N. Tucker Philips Research Labs, Crossoak Lane, Paradis Consultants, East Berriow, Redhill, Surrey, RH1 5HA, UK Berriow Bridge, North Hill, Nr. Launceston, gupta@prl.philips.co.uk Cornwall, PL15 7NL, UK Please share this information and display this announcement The British Computer Society Parallel Processing Specialist Group (BCS PPSG) General Purpose Parallel Computing 22 December 1993, Fyvie Hall, 309 Regent Street, University of Westminster, London, UK Provisional Programme 9 am-10 am Registration & Coffee L. Valiant, Harvard University, "Title to be announced" W. McColl, Oxford University, Programming models for General Purpose Parallel Computing A. Chin, King's College, London University, Locality of Reference in Bulk-Synchronous Parallel Computation P. Thannisch et al, Edinburgh University, Exponential Processor Requirements for Optimal Schedules in Architecture with Locality Lunch D. May, Inmos "Title to be announced" R. Miller, Oxford University, A Library for Bulk Synchronous Parallel Programming C. Jesshope et al, Surrey University, BSPC and the N-Computer Tea/Coffee P. Dew et al, Leeds University, Scalable Parallel Computing using the XPRAM model S. Turner et al, Exeter University, Portability and Parallelism with `Lightweight P4' N. Kalentery et al, University of Westminster, From BSP to a Virtual Von Neumann Machine R. Bisseling, Utrecht University, Scientific Computing on Bulk Synchronous Parallel Architectures B. Thompson et al, University College of Swansea, Equational Specification of Synchronous Concurrent Algorithms and Architectures 5.30 pm Close Please share this information and display this announcement The British Computer Society Parallel Processing Specialist Group Booking Form/Invoice BCS VAT No. : 440-3490-76 Please reserve a place at the Conference on General Purpose Parallel Computing, London, December 22 1993, for the individual(s) named below. Name of delegate BCS membership no. Fee VAT Total (if applicable) ___________________________________________________________________________ ___________________________________________________________________________ ___________________________________________________________________________ Cheques, in pounds sterling, should be made payable to "BCS Parallel Processing Specialist Group". Unfortunately credit card bookings cannot be accepted. 
The delegate fees (including lunch, refreshments and proceedings) are (in pounds sterling) : Members of both PPSG & BCS: 55 + 9.62 VAT = 64.62 PPSG or BCS members: 70 + 12.25 VAT = 82.25 Non members: 90 + 15.75 VAT = 105.75 Full-time students: 25 + 4.37 VAT = 29.37 (Students should provide a letter of endorsement from their supervisor that also clearly details their institution) Contact Address: ___________________________________________ ___________________________________________ ___________________________________________ Email address: _________________ Date: _________________ Day time telephone: ________________ Places are limited so please return this form as soon as possible to : Mrs C. Cunningham BCS PPSG 2 Mildenhall Close, Lower Earley, Reading, RG6 3AT, UK (Phone 0734 665570) -- -- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: andym@dcs.qmw.ac.uk (macfarlane) Subject: shared memory vs message passing Message-ID: Sender: usenet@dcs.qmw.ac.uk (Usenet News System) Organization: Computer Science Dept, QMW, University of London X-Newsreader: TIN [version 1.1 PL8] Date: Wed, 8 Dec 1993 15:26:07 GMT Can anyone send me references for shared memory systems against message passing system on distributed memorys ie advantages/disadvantages Andrew MacFarlane QMW College University of London Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: shankar@crhc.uiuc.edu (Shankar Ramaswamy) Subject: help with function parallel codes Date: 8 Dec 1993 16:02:49 GMT Organization: Center for Reliable and High-Performance Computing, University of Illinois at Urbana-Champaign i am looking for references to algorithms/programs that have functional parallelism. let me clarify what i mean by the term - functional parallelism exists when two or more loops/nested loops of the algorithm/program can execute in parallel. a simple example would be a multiply of two complex matrices - the real part of the product can be computed in parallel with the imaginary part of the product, i.e. : /* loop 1 */ for i=1,n for j=1,n for k=1,n creal[i][j]+=areal[i][k]*breal[k][j] - aimag[i][k]*bimag[k][j] end end end /* loop 2 */ for i=1,n for j=1,n for k=1,n cimag[i][j]+=areal[i][k]*bimag[k][j] + aimag[i][k]*breal[k][j] end end end loop 1 and loop 2 can execute independently. if any others are interested in such algorithms/program, send me mail - i will collect all references sent to me and post them on the net. thanks, -- shankar ramaswamy phone : (217)244-7168 {office} (217)367-7615 {home} email : shankar@crhc.uiuc.edu fax : (217)244-5685 ******************************************************** * * * Proud Member of the PARADIGM team * * * * PARADIGM : The Mother of all FORTRAN compilers * * FORTRAN : The Mother of all programming languages * * * ******************************************************** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cary@esl.com (Cary Jamison) Subject: Re: Data-parallel languages (non-CM C*)? Organization: ESL, Inc. A TRW Company References: <1993Dec7.155305.4663@hubcap.clemson.edu> In article <1993Dec7.155305.4663@hubcap.clemson.edu>, richards@wrl.EPI.COM (Fred Richards) wrote: > > Is any data-parallel language emerging as a standard, > much as PVM seems to be as a message-passing library? > > Does C*, or something *very* similar, run on any of the > other MPP machines (Intel, nCube, MasPar, etc.) 
Can't say that it's an emerging standard, but HyperC seems promising. It is running on workstation clusters (usually built on PVM), CM, MasPar, and is being ported to others such as nCube. Cary Jamison EEEEE SSS L Excellence Cary Jamison E S L Service cary@esl.com EEEE SSS L Leadership Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hickman@cwis.unomaha.edu (Betty Hickman) Subject: Sequent parallel programming reference needed Message-ID: Sender: news@news.unomaha.edu (UNO Network News Server) Organization: University of Nebraska at Omaha I'm trying to get a copy of the latest edition of Sequent's Guide to Parallel Programming book. The first two editions were published in-house I believe, but I don't know who published the latest edition. Any info would be appreciated. Betty Hickman hickman@felix.unomaha.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.super,comp.parallel From: cfreese@super.org (Craig F. Reese) Subject: Business Applications on Supercomputers.... Organization: Supercomputing Research Center (Bowie, MD) Hi all. It seems that supercomputer manufacturers are looking more and more towards the commercial/business areas as their markets. I'm curious. What are the types of computations typically done in these areas? I'm interested in better understanding how a business supercomputer might be architected differently than a scientific one (if there really is any significant difference). Anyway... If anyone has stumbled across any good references or kernel sets that apply to business applications, please post (or Email me). Many thanks, Craig P.S. I'm looking for something more than a sorts, I/O, lots of disk type answer. I realize that this is an open ended question and not easily answered in any detail. *** The opinions expressed are my own and do not necessarily reflect *** those of any other land dwelling mammals.... "The problem ain't what we don't know; it's what we know that just ain't so Either we take familiar things so much for granted that we never think about how they originated, or we "know" too much about them to investigate closely." ----------------- Craig F. Reese Email: cfreese@super.org Institute for Defense Analyses/ Supercomputing Research Center 17100 Science Dr. Bowie, MD 20715-4300 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stefan@SSD.intel.com (Stefan Tritscher) Subject: Re: Connect mouse to parallel? Keywords: chuckle, :-) Organization: Supercomputer Systems Divison, Intel Corp. References: <1993Dec5.215447.12636@msuvx1.memst.edu> In article <1993Dec5.215447.12636@msuvx1.memst.edu>, hus@msuvx1.memst.edu writes: |> Can I connect my mouse to the parallel port? How? |> Thanks. No - try comp.serial. --Stefan :-) -- Stefan Tritscher Intel Corporation | e-mail: stefan@esdc.intel.com European Supercomputer Development Center (ESDC) | phone: +49-89-99143-307 Dornacher Str. 1, 85622 Feldkirchen b. M., FRG | fax: +49-89-99143-932 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Sven-Bodo Scholz Subject: looking for a functional language on distributed memory machines We are interested in functional languages which are tailormade for running "real world" numerical applications on distributed memory systems (e.g. nCUBE, workstation cluster). 
So our questions are: - are there any implementations of functional languages which already support distributed memory machines (e.g. SISAL ???) - if so, where can we find references to state-of-the-art literature? - is anybody doing research in that field, and if so, who? The emphasis of our research is on "high performance computing" and "large" applications. Hence systems supporting arrays, so-called data parallelism, and destructive updates are of major interest. SISAL seems to be quite appropriate, but we were not able to find a version which supports distributed memory architectures. We are interested in any hints or advice via email or postings. Thanks in advance Sven-Bodo -- Sven-Bodo Scholz University of Kiel email: sbs@informatik.uni-kiel.d400.de Department of Computer Science Phone: +49-431-5604-52 Preusserstr. 1-9, 24105 Kiel Fax: +49-431-566143 Germany Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hcl@fai.com (Han Lung) Subject: Information on Paragon Keywords: Paragon performance Sender: hcl@fai.com Organization: Fujitsu Systems Business of America References: <1993Dec6.143410.4616@hubcap.clemson.edu> Date: Thu, 9 Dec 1993 17:08:19 GMT Apparently-To: uunet!comp-parallel This is a request for information. In Jack Dongarra's Linpack benchmark report, an 1872-processor Paragon is listed as having a peak performance of 94 GFLOPS, which translates to 50 MFLOPS/processor. The i860/XP microprocessor used in the Paragon, however, has a peak speed of 75 MFLOPS (1 multiply/2 clocks + 1 add/clock @ 50 MHz). We believe that the Paragon should be rated at 140 GFLOPS. (In other words, the Paragon's efficiency should be lower by one-third.) Jack Dongarra contends that, since most applications do a multiply and add together, they cannot make use of 1 add/every other clock, thus 50 MFLOPS (1 multiply/2 clocks + 1 add/2 clocks = 2 ops/2 clocks = 1 op/clock). However, an HPCwire report dated Oct. 28, 1993 states: > 2180) SANDIA BREAKS 100 GFLOPS MARK ON 1,840-NODE PARAGON SYSTEM 44 Lines > Albuquerque, N.M. -- Scientists at Sandia National Laboratories have > achieved record-setting performance for the second time in as many months, > as they recorded 102.050 GFLOPS on a double-precision complex LU ^^^^^^^^^^^^^^ > factorization running on their full 1,840-node Intel Paragon supercomputer. The figures imply that each node runs at 55.5 MFLOPS (102 GFLOPS/1840). This exceeds the per-node peak performance of the 1872-node Paragon listed in Table 3 of the Linpack report dated Nov. 23, 1993 (94 GFLOPS/1872 = 50 MFLOPS/node). Unless the clock rate of the Paragon used at Sandia is substantially faster than 50 MHz, the peak rating which appears in Table 3 cannot be correct. Does anyone know, or can anyone find out, the exact processor count and the clock rate for the Paragon installed at Sandia? As an aside, I don't see how one can get more than 50 MFLOPS from a processor, even for a complex multiply/add (2 +'s for add, 4 x's and 2 +'s for multiply = 4 x's and 4 +'s, which takes 8 clocks, so on average 1 op/clock, which gives 50 MFLOPS). Any ideas on how to make use of the additional add? Thanks for any help.
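A quick back-of-the-envelope check of these figures, as a minimal C sketch (the 50 MHz clock and 1872-node count are the numbers quoted above; the three rate conventions are just the ones discussed in the post, not anything beyond it):

#include <stdio.h>

int main(void)
{
    double clock_mhz = 50.0;   /* i860/XP clock rate quoted above        */
    int nodes = 1872;          /* node count from the Linpack table      */

    /* (a) 1 multiply per 2 clocks plus 1 add per clock: 1.5 flops/clock */
    double peak_a = clock_mhz * (0.5 + 1.0);
    /* (b) paired multiply-add, 1 multiply/2 clocks + 1 add/2 clocks:
           1 flop/clock                                                  */
    double peak_b = clock_mhz * (0.5 + 0.5);
    /* (c) complex multiply-add: 4 multiplies + 4 adds in 8 clocks:
           1 flop/clock                                                  */
    double peak_c = clock_mhz * (8.0 / 8.0);

    printf("per node  : %.1f vs %.1f vs %.1f MFLOPS\n", peak_a, peak_b, peak_c);
    printf("1872 nodes: %.1f vs %.1f GFLOPS\n",
           nodes * peak_a / 1000.0, nodes * peak_b / 1000.0);
    return 0;
}

Convention (a) reproduces the 75 MFLOPS/node and roughly 140 GFLOPS figures, while (b) and (c) both give 50 MFLOPS/node and about 94 GFLOPS, i.e. the rating in the Linpack table.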
===================================== Han Lung Supercomputer Applications Department Fujitsu Systems Business of America 5200 Patrick Henry Drive Santa Clara, CA 95054 Tel: (408) 988-8012 x263 Fax: (408) 492-1982 E-mail: hcl@fai.com ===================================== Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.sequent,comp.parallel From: jmc3@engr.engr.uark.edu (Dr. James M. Conrad) Subject: Steal your idle CPU cycles Sender: netnews@engr.uark.edu (NetNews Administrator) Nntp-Posting-Host: jconrad.engr.uark.edu Reply-To: jmc3@engr.engr.uark.edu Organization: University of Arkansas College of Engineering Date: Thu, 9 Dec 1993 19:58:02 GMT Content-Type: text Content-Length: 606 My two sources of Sequent machines have dried up. I need to finish some code measurements for potential conference papers. Does anyone out there have some spare cycles they would let a poor Assistant Professor have?? Thanks!!!!!!! ------------------------------------------------------------------------- James M. Conrad, Assistant Professor jmc3@jconrad.engr.uark.edu Computer Systems Engineering Department jmc3@engr.engr.uark.edu University of Arkansas, 313 Engineering Hall, Fayetteville, AR 72701-1201 Dept: (501) 575-6036 Office: (501) 575-6039 FAX: (501) 575-5339 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jet@nas.nasa.gov (J. Eric Townsend) Subject: Re: Data-parallel languages (non-CM C*)? Sender: news@nas.nasa.gov (News Administrator) Organization: NAS/NASA-Ames Research Center References: <1993Dec7.155305.4663@hubcap.clemson.edu> <1993Dec9.142826.1645@hubcap.clemson.edu> "cary" == Cary Jamison writes: cary> Can't say that it's an emerging standard, but HyperC seems cary> promising. It is running on workstation clusters (usually built cary> on PVM), CM, MasPar, and is being ported to others such as cary> nCube. Is this similar to Babel's HyperTasking stuff he did while at Intel? -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: anshu@eng.umd.edu (Anshu Mehra) Subject: Parallel Computing in Distributed systems Date: 9 Dec 1993 22:37:11 GMT Organization: University of Maryland, College Park Nntp-Posting-Host: raphael.eng.umd.edu Originator: anshu@raphael.eng.umd.edu I have to run a set of independent programs (by independent, I mean that there is no transfer of data from one program to another) on a set of Sparc/Sun workstations. The advantage of running these programs on different Sun stations is time saving. Can anyone help me with this problem? What language should I use to write the code which will execute all these programs concurrently on different machines? BTW, these programs are coded in C. Any comments will be highly appreciated. Thanks Anshu (anshu@eng.umd.edu) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Sender: netnews@engr.uark.edu (NetNews Administrator) To: comp-parallel@relay1.uu.net Newsgroups: alt.sources.wanted,comp.sources.wanted,comp.parallel,comp.parallel.pvm Path: cseg01.engr.uark.edu!cab3 From: cab3@cseg01.engr.uark.edu (Chad A.
Bersche) Subject: Wanted - Traveling Salesman Code Summary: Need code for Traveling Salesman, preferrably parallel Keywords: Parallel, salesman, traveling, wanted Sender: netnews@engr.uark.edu (NetNews Administrator) Nntp-Posting-Host: cseg01.engr.uark.edu Organization: University of Arkansas Date: Fri, 10 Dec 1993 06:17:10 GMT Content-Type: text Content-Length: 1082 I am trying to locate some code which solves the Traveling Salesman problem. I am attempting to do this on any of the following parallel processing machines: PVM, N-Cube, or CM-2. I would even settle for sequential code at this point, just to have a place to start from. I'm needing to get started on this as soon as possible (I've planned on it this weekend), and I'll settle for any pointers to existing code that may exist. Algorithms are also welcome. I'm in the middle of finals here, so I would prefer e-mail responses so I don't have to browse through the newsgroups right now. I'd be GLAD to summarize anything I receive if there is sufficient interest in it. Thanks in advance for any assistance! -- cab3@engr.engr.uark.edu -- Chad A. Bersche, Systems Administrator -- cab3@uafhp.uark.edu -- Department of Computer Systems Engineering -- cab3@vnet.ibm.com -- IBM Customer Solution Center Network Support -- Look, would it save you a lot of time if I just gave up and went mad now? Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Simon.N.Smith@cm.cf.ac.uk (Simon McIntosh-Smith) Subject: guessing loop iterations? Message-ID: <1993Dec10.093157.3312@cm.cf.ac.uk> Sender: news@cm.cf.ac.uk (Network News System) Organization: University of Wales College of Cardiff, Cardiff, WALES, UK. Date: Fri, 10 Dec 1993 09:31:55 +0000 I'm performing analysis on conditional loops like the following: while () { } and I want to make a guess at how many times a loop like this might iterate. Can anyone point me towards recent work on this? It looks like the kind of thing performance analysers would do. One piece of information I have so far is that conditional loops performing convergant calculations will commonly perform around 50 iterations [1]. Any more gems like this would be greatly appreciated. Summary to the net will follow if there is interest. Thanks for your help, Simon [1] @article{gabber:p3c, author = {Amir Averbuch and Eran Gabber and Amiram Yehudai}, journal = {{IEEE} Software}, month = mar, number = {2}, pages = {71--81}, title = {Portable, Parallelising Pascal Compiler}, volume = {10}, year = {1993} } Simon N. McIntosh-Smith, PhD candidate | Email : Simon.N.Smith@cm.cf.ac.uk Room M/1.36 Department of Computing Maths | Phone : +44 (0)222 874000 University of Wales, College of Cardiff | Fax : +44 (0)222 666182 PO Box 916, Cardiff, Wales, CF2 4YN, U.K. | Home : +44 (0)222 560522 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: brunzema@DBresearch-berlin.de (Martin Brunzema) Subject: Re: Data-parallel languages (non-CM C*)? Organization: Daimler Benz Systemtechnik References: <1993Dec7.155305.4663@hubcap.clemson.edu> richards@wrl.EPI.COM (Fred Richards) writes: >Is any data-parallel language emerging as a standard, >much as PVM seems to be as a message-passing library? >Does C*, or something *very* similar, run on any of the >other MPP machines (Intel, nCube, MasPar, etc.) Hatcher from the university of New Hampshire (?) has developed Dataparallel-C for the Intel iPSC/2 and the nCube some time ago. 
Dataparallel-C is based on a former version of the C* language (with domains). It has also been ported to a Meiko, but it is far from being a standard. -- __o one Martin Brunzema _`\<_ car brunzema@DBresearch-berlin.de (_)/(_) less Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Frank.Kemper@arbi.informatik.uni-oldenburg.de (Frank Kemper) Subject: LINDA - What has happened ? Organization: University of Oldenburg, Germany Date: Fri, 10 Dec 1993 10:22:34 GMT I am a student who has to work on the topic of LINDA. The basis for my work is an article from 1988 about the LINDA Coprocessor. Now I am interested in the state of the art of Linda. It would be nice if anyone could help me. My e-mail address is: Frank.Kemper@Informatik.Uni-Oldenburd.DE Frank Kemper Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ucapcdt@ucl.ac.uk (Christopher David Tomlinson) Subject: UK address for MASPAR Organization: Bloomsbury Computing Consortium Could anybody please supply me with the UK address for MASPAR Computing Corp.? I have just had a letter returned from the following address: First Base, Beacontree Plaza, Gillette Way, READING RG2 OBP Thanks for any help Chris Tomlinson (c.tomlinson@ucl.ac.uk) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: zz@cs.purdue.edu (Zhongyun Zhang) Subject: Address of PGI? Organization: Purdue University, West Lafayette, IN Would anyone tell me the contact info for the company named PGI that developed i860 compilers for the Intel iPSC/Delta/Paragon? Thanks. Tony Z. Zhang Dept of Computer Sciences Purdue University West Lafayette, IN 47907 Email: zz@cs.purdue.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tuecke@mcs.anl.gov (Steve Tuecke) Subject: Predoctoral Appointments at Argonne Organization: Argonne National Laboratory, Chicago, Illinois Predoctoral Positions Mathematics and Computer Science Division Argonne National Laboratory Argonne National Laboratory invites outstanding candidates to apply for two predoctoral positions in the Mathematics and Computer Science Division. The purpose of the predoctoral appointment is to offer outreach opportunities for developing the full potential (i.e., Ph.D. degree) of underutilized protected groups. A divisional mentor will assist participants in preparing a thesis proposal, enrolling in a doctoral program, and in returning to Argonne for subsequent summers, thesis appointments, and postdoctoral appointments. The predoctoral appointees will participate in an exciting research and development project designing and implementing new parallel programming languages and tools. The first appointee will focus on the design and implementation of a compiler for Fortran M, a parallel extension to Fortran. Experience in compilers is desirable. The second appointee will focus on the design and implementation of a parallel, threaded run-time system called Nexus, used as a compiler target for Fortran M and other parallel languages. Experience in threads and parallel programming is desirable. Candidates should have knowledge and experience in the following areas: programming with C, C++, and Fortran in a Unix environment; parallel programming tools, programming language design; compilers; and parallel applications.
A good knowledge of com- puter architecture and computer systems organization is pre- ferred. The successful candidates will create, maintain, and sup- port high-quality, imaginative software on a wide variety of com- puters, including networks of workstations and massively parallel supercomputers, such as our 128-node IBM 9076 Scalable POWER- Parallel System 1 and a 512-processor Intel Paragon. Applicants must have received their M.S. not more than three years prior to the beginning of the appointment. Applications must be addressed to Walter McFall, Box mcs-predoc, Employment and Placement, Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439, and must include a resume' and the names and addresses of three references. For further information, con- tact Steven Tuecke (tuecke@mcs.anl.gov; 708-252-7162). Initial appointments are for a period of one year, but may be ex- tended in increments of one year or less, to a maximum of two years. U.S. citizens and/or permanent residents are eligible to participate in this program. Argonne is an affirmative action/equal opportunity employer. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ebert@cs.umbc.edu (Dr. David Ebert) Subject: CFP: 1994 Symposium on Volume Visualization Date: 10 Dec 1993 13:56:42 -0500 Organization: U. Maryland Baltimore County Computer Science Dept. 1994 Symposium on Volume Visualization October 17-18, 1994 Washington, DC Call for Participation Following our three successful meetings (the Chapel Hill '89, San Diego '90, and Boston '92 Workshops on Volume Visualization), this fourth meeting will provide the opportunity for demonstrations of new developments in this evolving area. Scientists from all disciplines involved in the visual presentation and interpretation of volumetric data are invited to both submit and attend this Symposium. The Symposium is sponsored by ACM-SIGGRAPH and the IEEE Computer Society Technical Committee on Computer Graphics. This Workshop will take place during the week of October 17-21, 1994 at the Sheraton Premiere at Tyson Center Hotel in Washington DC area, in conjunction with the Visualization '94 Conference. Six copies of original material should be submitted to the program co-chairs on or before March 31, 1994. Authors from North America are asked to submit their papers to Arie Kaufman. All others are to submit their papers to Wolfgang Krueger. Suggested topics include, but are not limited to: * Volume visualization of unstructured and irregular grids. * Parallel and distributed volume visualization. * Hardware and software systems. * Validation and control of rendering quality. * Volume segmentation and analysis. * Management, storage, and rendering of large datasets. * User interfacing to volume visualization systems. * Acceleration techniques for volume rendering. * Fusion and visualization of multimodal and multidimensional data. * Visualization of non-scalar volumetric information. * Modeling and realistic rendering with volumes. * Discipline-specific application of volume visualization. Papers should be limited to 5,000 words and may be accompanied by an NTSC video (6 copies, please). The accepted papers will appear in the Symposium Proceeding that will be published by ACM/SIGGRAPH and will be distributed to all SIGGRAPH Member "Plus". Program Co-chairs: Arie Kaufman Wolfgang Krueger Computer Science Department Dept. of Scientific Visualization, GMD-HLRZ State University of New York P.O. 
Box 1316, Schloss Birlinghoven Stony Brook, NY 11794-4400 D-5205 Sankt Augustin 1 GERMANY Telephone: 516-632-8441/8428 Telephone: +49 (2241) 14-2367 Fax: 516-632-8334 Fax: +49 (2241) 14-2040 Email: ari@cs.sunysb.edu Email: krueger@viswiz.gmd.de Symposium Co-chairs: Roni Yagel Holly Rushmeier Dept. of Computer Science Rm. B-146, Bldg. 225 The Ohio State University NIST 2036 Neil Av. Columbus, OH 43210 Gaithersburg, MD 20899 Telephone: 614-292-0060 Telephone: 301-975-3918 Fax: 614-292-2911 Fax: 301-963-9137 Email: yagel@cis.ohio-state.edu Email: holly@cam.nist.gov Program Committee: Nick England - University of North Carolina, Chapel Hill Pat Hanrahan - Princeton University Marc Levoy - Stanford University Bill Lorensen - General Electric Co. Nelson Max - Lawrence Livermore National Labs Greg Nielson - Arizona State University Sam Uselton - CS Corp - NASA Ames Jane Wilhelms - University of California at Santa Cruz Symposium Committee: David Ebert - University of Maryland, Baltimore County Todd Elvins - San Diego Supercomputer Center Larry Gelberg - AVS -- -- Dr. David S. Ebert, Computer Science Department, University of Maryland, -- --- Baltimore County; 5401 Wilkens Ave., Baltimore, MD USA 21228-5398 ------- ------ ebert@cs.umbc.edu or ..!{att,pyramid,killer}!cs.umbc.edu!ebert -------- ------------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: gkj@jacobs.esd.ornl.gov (Gary K. Jacobs) Subject: JOB - Computational Scientist Message-ID: <1993Dec10.201138.12025@ornl.gov> Organization: Oak Ridge National Laboratory Computational Scientist (High-Performance Computing in Environmental Technologies) The Center for Computational Sciences (CCS) at the Oak Ridge National Laboratory (ORNL), a recognized leader in multidisciplinary research and development, is seeking a Computational Scientist for the development and implementation of computational methods on massively parallel computer architectures to help solve environmental problems of critical importance to industry. The position is part of a multidisciplinary team of national laboratory and university collaborators working on "Grand Challenges" related to environmental areas such as groundwater remediation and global change. The successful applicant will be able to work with scientists, programmers, and vendors to create effective implementations of existing codes on state-of-the-art parallel supercomputers employing new hardware and software. Where existing codes are inadequate, the researcher will be encouraged to develop innovative algorithms and software tools to simulate environmental systems. The CCS currently houses a Kendall Square Research computer and two Intel Paragons. The position requires a PhD (or MS and equivalent combination of education and experience) in a quantitative science (hydrology, geochemistry, computer science, mathematics, or engineering). Application programming experience on massively parallel processors and FORTRAN programming proficiency are required. Familiarity with C and UNIX or OSF/1 based computers is desired, along with experience in development and use of visualization software and spatial analysis. Demonstrated interpersonal skills, oral and written communication skills and strong personal motivation are essential. ORNL is a multi-purpose research facility managed by Martin Marietta Energy Systems for the U.S. Department of Energy. 
ORNL offers a competitive compensation and benefits package, including relocation. For immediate consideration, send your resume to: J. K. Johnson Oak Ridge National Laboratory Dept. NetNews P.O. Box 2008 Oak Ridge, Tennessee 37831-6216 ORNL is an equal opportunity employer committed to building and maintaining a diverse workforce. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.arch From: bart@asiago.cs.wisc.edu (Bart Miller) Subject: New tech reports on parallel measurement tools Message-ID: <1993Dec10.204910.13584@cs.wisc.edu> Organization: University of Wisconsin, Madison -- Computer Sciences Dept. Two new technical reports are available via anonymous ftp from the Paradyn Parallel Program Performance Tool project at UW-Madison. The first paper is about our techniques and implementation for dynamic (on the fly) instrumentation of programs. The second paper is about collecting and mapping performance data to high-level parallel languages, with a description of a prototype implementation of these ideas for CM Fortran. An earlier paper on the Performance Consultant part of Paradyn is also available. These files are found on grilled.cs.wisc.edu (128.105.36.37): technical_papers/dyninst.ps.Z Dynamic instr paper technical_papers/nv.ps.Z High-level language tool paper technical_papers/w3search.ps.Z Performance Consultant paper Also in this directory is a file called "READ_ME", listing titles, authors, and citations for each paper. File "ABSTRACTS" contains this information, plus abstracts for each paper. Questions on the Paradyn system can be directed to paradyn@cs.wisc.edu. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Gordon D B Cameron Subject: Re: I/O on parallel machines Sender: UseNet News Admin Organization: Edinburgh Parallel Computing Centre References: <1993Dec3.132928.28740@hubcap.clemson.edu> Hi, (Sorry for the inaccuracy in the ftp directory given in the references for parallel I/O here at EPCC. The address is /pub/pul at ftp.epcc.ed.ac.uk, and the parallel file i/o utility is known as PUL-GF. The files start with the gf- prefix, and are compressed PostScript. I hope this is of use, -G.) -- ~ Gordon Cameron ( BSG & Visualisation ) Phone: +44 31 650 5024 (Rm. 2259) ~ Edinburgh Parallel Computing Centre e|p Email: gordonc@epcc.ed.ac.uk ~ The University of Edinburgh c|c 'So far so good, so now so what' Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Newsgroups: comp.sys.super,comp.parallel Path: epcc.ed.ac.uk!gordonc From: Gordon D B Cameron Subject: Re: I/O on parallel machines Keywords: parallel I/O, dartmouth archive, example codes Sender: UseNet News Admin Organization: Edinburgh Parallel Computing Centre References: Date: Fri, 10 Dec 1993 20:09:17 GMT Apparently-To: comp-parallel@uknet.ac.uk Hi, Here at EPCC, people have been investigating utilities for parallel IO for some time. You may want to look at the /ftp/pul directory at: ftp.epcc.ed.ac.uk -- ~ Gordon Cameron ( BSG & Visualisation ) Phone: +44 31 650 5024 (Rm. 2259) ~ Edinburgh Parallel Computing Centre e|p Email: gordonc@epcc.ed.ac.uk ~ The University of Edinburgh c|c 'So far so good, so now so what' Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: "Brian D. Alleyne" Subject: Performance figures for the Intel Paragon... Newsgroups: comp.parallel,comp.sys.super Measurement times on the Intel Paragon...
Would anyone have the following data? From the time that you decide to send a message, how long does it take (software overhead) to send a message to the network? (ie. this does not include the latency of the network, just the time to launch a message). What is the transfer time for a random communication? (ie, every processing node picks another at random, and sends a message there at the same time. I want the time for all messages to get to their destinations for a small message ~ 64 bytes or less). What is the transfer rate for a random communication? (ie, every processing node picks another at random, and sends a message there. Messages should be of the order of 64 kbytes). If you do, please also tell me what size machine it was run on. Thanks a million in advance. Brian ------------------------------------------------------------- alleyne@ics.uci.edu If anything can go wRong, it just did. ------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jet@nas.nasa.gov (J. Eric Townsend) Subject: Re: Information on Paragon In-Reply-To: hcl@fai.com's message of Thu, 9 Dec 1993 17:08:19 GMT Sender: news@nas.nasa.gov (News Administrator) Nntp-Posting-Host: boxer.nas.nasa.gov Organization: NAS/NASA-Ames Research Center References: <1993Dec6.143410.4616@hubcap.clemson.edu> <1993Dec10.135148.11153@hubcap.clemson.edu> Date: Fri, 10 Dec 1993 22:06:39 GMT Apparently-To: comp-parallel@ames.arc.nasa.gov "hcl" == Han Lung writes: hcl> The figures imply that each node runs at 55.5 MFLOPS (102 hcl> GFLOPS/1840). This exceeds the per-node peak performance of the hcl> 1872-node Paragon listed in Table 3 of the Linpack report dated hcl> Nov. 23, 1993 (94 GFLOPS/1872 = 50 MFLOPS/node). Unless the It is possible to get higher speeds out of a processor than the Linpack numbers. Unlikely, difficult, and worth noting when it happens, but it can be done. Dave Scott supposedly got 38.some-odd MFLOPS out of i860XR's (peak 40 MFLOPS) on the iPSC/860 by handcoding damn near everything in sight. (Maybe someone from SSD could comment on this?) hcl> Does anyone know, or can anyone find out, the exact processor hcl> count and the clock rate for the Paragon installed at Sandia? I believe that the 1872 node figure is substantially correct. hcl> As an aside, I don't see how one can get more than 50 MFLOPS from hcl> a processor, even for a complex multiply/add (2 +'s for add, 4 They don't have to. It's possible that the 1872 node figure is actually node boards, not CPUs. ie: they could actually have 3744 i860's (1872*2) operational, which would mean that they get 50 MFLOPS/node, 25 MFLOPS/cpu. -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mbryant@mbvlab.wpafb.af.mil (Michael L. Bryant) Subject: Parallel Processing Benchmarks Followup-To: comp.parallel Date: 10 Dec 1993 22:39:34 GMT Organization: USAF Nntp-Posting-Host: mbryant.mbvlab.wpafb.af.mil Do any "accepted" benchmarks exist for parallel processing systems which measure things like FLOPS/MIPS at the node level, FLOPS/MIPS at the system level, node communication, I/O, etc.?
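Benchmarks of that sort mostly come down to carefully timed loops, and for the per-message software overhead asked about earlier in this batch the usual quick measurement is a ping-pong test: time many round trips between a pair of nodes and halve the average, remembering that the result still includes network latency and is therefore only an upper bound on the pure launch cost. A minimal sketch in C follows; it is written against PVM 3 rather than the Paragon's native calls, and the 64-byte size and repetition count are assumed values, not measurements.

/* pingpong.c -- rough round-trip timing sketch (PVM 3 assumed, not
 * Paragon-specific; the native message-passing calls would be
 * substituted on that machine).  Run one copy with the argument
 * "master"; it spawns the echoing partner itself.
 */
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include "pvm3.h"

#define NREPS  1000   /* assumed repetition count */
#define NBYTES 64     /* assumed small-message size */

int main(int argc, char **argv)
{
    char buf[NBYTES];
    int i;

    pvm_mytid();                      /* enrolls this process in PVM */
    memset(buf, 0, NBYTES);

    if (argc > 1 && strcmp(argv[1], "master") == 0) {
        int slave;
        struct timeval t0, t1;
        double secs;

        if (pvm_spawn("pingpong", (char **)0, PvmTaskDefault, "", 1, &slave) != 1) {
            fprintf(stderr, "spawn failed\n");
            pvm_exit();
            return 1;
        }
        gettimeofday(&t0, NULL);
        for (i = 0; i < NREPS; i++) {
            pvm_initsend(PvmDataRaw);
            pvm_pkbyte(buf, NBYTES, 1);
            pvm_send(slave, 1);       /* ping */
            pvm_recv(slave, 2);       /* pong */
            pvm_upkbyte(buf, NBYTES, 1);
        }
        gettimeofday(&t1, NULL);
        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) * 1e-6;
        printf("average one-way time: %g usec for %d bytes\n",
               secs / NREPS / 2.0 * 1e6, NBYTES);
    } else {                          /* echo slave */
        int master = pvm_parent();
        for (i = 0; i < NREPS; i++) {
            pvm_recv(master, 1);
            pvm_upkbyte(buf, NBYTES, 1);
            pvm_initsend(PvmDataRaw);
            pvm_pkbyte(buf, NBYTES, 1);
            pvm_send(master, 2);
        }
    }
    pvm_exit();
    return 0;
}

The random-traffic numbers asked for need the same loop running on every node against randomly chosen partners, with a barrier before the clock starts; only the partner selection and the global timing differ.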
Does a document exist somewhere on the net which lists the performance of several machines using these benchmarks? I realize that benchmark performance will only be a rough indicator of a specific application's performance. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.arch,comp.sys.super,comp.parallel From: jet@nas.nasa.gov (J. Eric Townsend) Subject: MPP and paging (was Re: Why doesn't anybody talk about the NEC super's? In-Reply-To: utsumi@ayame.mfd.cs.fujitsu.co.jp's message of 10 Dec 93 14:01:59 Sender: news@nas.nasa.gov (News Administrator) Nntp-Posting-Host: boxer.nas.nasa.gov Organization: NAS/NASA-Ames Research Center Date: Fri, 10 Dec 1993 23:16:10 GMT Apparently-To: comp-parallel@ames.arc.nasa.gov "utsumi" == Teruo Utsumi writes: utsumi> To shed a little light on the subject, I'll describe briefly utsumi> address translation for the processing element of our VPP500. utsumi> I believe the basic ideas are the same for the NEC SX-3 and utsumi> Hitachi S-3800. (BTW, can anyone describe how MPPs deal w/ utsumi> paging and protection?) Paging? What is paging? If we're talking about paging in and out of executables and data between RAM and mass storage, then... TMC: The CM-2 doesn't page. It's an attached array processor. The CM-5 under CMOST 7.[1,2] does not page. It has virtual addressing (ie: all addresses are not between 0 and 32M), but it does not page in/out or swap. All programs/data are loaded into core memory for execution. I'm not sure if any sort of run-time page relocation occurs, so it is possible that addresses are mapped from virtual to static at load-time. Multiple processes can run on a node, so I suspect that this isn't the case. The CM-5 uses SPARC chips. There's no reason it couldn't page, other than performance and development costs. Intel: The iPSC/860 is much like the CM-2 in that it is an attached processor more than a standalone parallel system. Its 'operating system' is as much an operating system as MS-DOS is. (Which is to say, that it isn't in the strict definition of the words. It's a program loader.) The Paragon, running OSF1/AD, does full unix paging on each node. It also runs a full UNIX server on each node (good-bye memory :-). It's a simple thought-problem to imagine what happens if your application occupies the network space between the file system and another application that needs to page heavily. The i860XP can apparently have 1MB or 1KB page tables. I don't know if anyone has used the 1MB page tables, tho. -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Path: bounce-back Newsgroups: comp.sys.super,comp.parallel From: murphyx@superior.ccs.carleton.ca (Michel Murphy) Subject: Re: Business Applications on Supercomputers.... Followup-To: comp.sys.super,comp.parallel Sender: news@cunews.carleton.ca (News Administrator) Organization: Carleton University X-Newsreader: TIN [version 1.2 PL0] References: <1993Dec9.142854.1807@hubcap.clemson.edu> Date: Sat, 11 Dec 1993 03:31:58 GMT Apparently-To: comp-parallel@math.waterloo.edu Craig F. Reese (cfreese@super.org) wrote: : It seems that supercomputer manufacturers are looking more and more : towards the commercial/business areas as their markets. : I'm curious.
What are the types of computations typically done in : these areas? If you look at the Cray ftp site (ftp.cray.com) you'll see that Merrill Lynch just bought a Cray YMP. They're using it for portfolio analysis (risk/return). Given the large amount of statistics and large datasets, lots of problems seem suited for a vector machine. I'm sure that a vector machine would drastically reduce the run time of Monte-Carlo simulations. For the past two years, I've been working with the Canadian Department of Finance (similar to the US Office of the Budget) and came across several problems which would have been very nicely handled by a vector machine. We had to write loops and most of the models took a few hours to run. When the Minister wanted something by 4pm, it would have been nice to crank the entire model on a Cray instead of submitting an incomplete analysis of an economic problem. : I'm interested in better understanding how a business supercomputer : might be architected differently than a scientific one (if there really : is any significant difference). The business/financial/economics problems that I have come across seem to be indeed very similar to scientific ones. (I've worked on both) If you'd like to discuss this any further, you can email me at murphyx@ccs.carleton.ca. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: edmond@tripos.com (Edmond Abrahamian) Subject: PhD work off-campus? Message-ID: Summary: off-campus PhD research anywhere? Keywords: PhD off-campus-research Organization: Tripos Associates, Inc. Date: Sat, 11 Dec 1993 06:54:25 GMT Are there universities that allow PhD candidates to work on their research away from the campus? In my particular case, I am unable to completely quit my job (for monetary reasons) to pursue postgraduate work, yet my job offers a particularly good environment for research in the areas of molecular modelling, molecular mechanics, 3-d compound searching, computer graphics, parallel processing, algorithmics, and artificial intelligence, among possible others. I am interested in working towards a PhD in computer science. I am seeking a program that would allow me to do doctoral research work off-campus. Are universities in Europe more receptive to this idea than those in the U.S.? In particular, I hear that course work is not mandatory there. Can anyone help me at all on this subject? I apologize if this posting is not appropriate for this newsgroup. thanks, Edmond --------------------------------------------------------------------------- Edmond Abrahamian voice +1 314 647 8837 ext 3281 Tripos Associates fax +1 314 647 9241 1699 S. Hanley Rd. Suite 303 email tripos.com!edmond@wupost.wustl.edu St.Louis MO 63144 USA --------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: vip@kvant.munic.msk.su (Andrei I. Massalovitch) Subject: Was: COCOM... Thanks a lot - hundred responses ! Date: Sat, 11 Dec 1993 11:31:57 GMT X-Mailer: BML [MS/DOS Beauty Mail v.1.25] Reply-To: vip@kvant.munic.msk.su Organization: NII Kvant Keywords: COCOM Sender: news-server@kremvax.demos.su Summary: Was: COCOM... Thanks a lot - hundred responses ! Message-ID: Dear Colleagues, Thanks a lot for the one hundred responses to my letter "COCOM is dead !..." I'll try to answer all. Sorry for the delay. Thanks again.
Andrei Massalovitch Parallel Systems Division S&R Institute KVANT, Moscow, Russia -- Andrei Massalovitch Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: vip@kvant.munic.msk.su (Andrei I. Massalovitch) Subject: Is the performance/price of Alpha really good ? Date: Sat, 11 Dec 1993 11:27:15 GMT X-Mailer: BML [MS/DOS Beauty Mail v.1.25] Reply-To: vip@kvant.munic.msk.su Organization: NII Kvant Keywords: Alpha, price/performance Sender: news-server@kremvax.demos.su Summary: Is the performance/price of Alpha really good ? Message-ID: Dear Colleagues, the discussion about DEC Alpha benchmarks is very interesting, but what about price/performance of Alpha workstations ? As I think, Pentium and may be 486 look more tasty ? I should be obliged if somebody would forward me any information about concrete models of Alpha Workstations and your opinion about this side of problem. Thanks in advance. Andrei Massalovitch Parallel Systems Division S&R Institute KVANT, Moscow, Russia -- Andrei Massalovitch Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: thibaud@kether.cgd.ucar.edu (Francois P. Thibaud) Subject: search for benchmark Keywords: benchmark, programming model, parallel Sender: news@ncar.ucar.edu (USENET Maintenance) Organization: Climate and Global Dynamics Division/NCAR, Boulder, CO Date: Sun, 12 Dec 1993 00:24:59 GMT Apparently-To: comp-parallel@ncar Hello Everybody ! On behalf of Bruce Curtis (National Energy Research Supercomputer Center (NERSC), Livermore, CA), I am searching for a rare pearl: an application program written (coded) using the 3 following programming models: * shared memory (e.g. Fortran 77 with parallel Fortran preprocessor directives, like Cray's "fmp"), * message passing (e.g. Fortran 77 with PVM-2/3), * data parallel (e.g. Fortran 90, HPF, TMC's CM Fortran, using array constructs). This code would be used for benchmarking within the NERSC procurement procedure for a (big) MPP. The program has to be: * well written and documented (mandatory, Fortran 6 aligned on 7th column without any comment is not OK), * public domain (your classified warp drive's coil simulation code is probably not OK, even though I would be interested in looking at it 8=), * operational NOW with the 3 programming models (mandatory), * reasonably short (less then 10k lines). Reward: my eternal gratitude, NERSC's benchmarking team's eternal gratitude, all the future users of NERSC's MPP's eternal gratitude (NO formula has been found to convert "eternal gratitude" to CPU hours 8=). If you happen to know somebody who may have such a rare pearl, please forward this message. If you happen to own such a rare pearl and are willing to share it, please respond by E-mail to "thibaud@ncar.ucar.edu". Kind Regards ! Francois P. 
Thibaud Organization: University of Maryland at College Park (UMCP) and The National Center for Atmospheric Research (NCAR) Address: 1850, Table Mesa Drive; PO Box 3000; Boulder CO 80307-3000 USA Phone: (+1)303-497-1707; Fax: (+1)303-497-1700; Room 505, North tower Internet: thibaud@ncar.ucar.edu (thibaud@ra.cgd.ucar.edu) Approved: parallel@hubcap.clemson.edu Path: bounce-back From: sirosh@cs.utexas.edu (Joseph Sirosh) Newsgroups: comp.sys.super,comp.parallel Subject: T3D computation to communication Followup-To: comp.sys.super Organization: CS Dept, University of Texas at Austin One of the ways to estimate how well-balanced an MPP is between computation and communication is to estimate the average no. of floating point operations that can be done by one processor in the time to send and receive a packet (including startup latency for the packet) between two processors. This is a good rule of the thumb, assuming no hot spot problems, and not too intensive communication requirements. Can anyone provide a rough estimate for the CRAY-T3D in this respect? Let us assume we use CRAY's version of PVM for communication. It would be good to get the figures for processor to processor communication, as well as broadcast (since special hardware is present for broadcast). Thanks --Joseph Sirosh PS: Figures for other machines (say KSR 1/2, Paragon, CM-5) should also be interesting, for comparison. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.super,comp.parallel From: tez@chopin.udel.edu (Timothy E Zickus) Subject: Re: Business Applications on Supercomputers.... Message-ID: Sender: usenet@news.udel.edu Organization: University of Delaware References: <1993Dec9.142854.1807@hubcap.clemson.edu> cfreese@super.org (Craig F. Reese) writes: > >It seems that supercomputer manufacturers are looking more and more >towards the commercial/business areas as their markets. > >I'm curious. What are the types of computations typically done in >these areas? Craig, My company, Quantum Development, has recently announced a version of our Quantum Leap problem solving workbench for IBM's SP1 hardware. In a nutshell, the application allows users to quickly build complex business models and subsequently perform global optimization and constraint satisfaction on the resulting model to support a decision-making process. The types of applications that can make use of this type of product run the whole spectrum from financial to manufacturing. Our problem solving engine is very CPU-hungry, of course, so any hardware that lets us distribute the computations across many processors (either within the same box, or across a LAN) is ideal. For example, we also recently announced our SMP OS/2 version as well, which allows us to make use of desktop servers for smaller-scale problems than on the SP1 class server. I won't invite flames by posting it in this forum, but I will be glad to send you (& anyone else who is interested) a copy of our SP1 press release by e-mail. best regards, - Tim -- -------------------------------------------------------------------------- Love is instinctive | Happiness is a good cup | tim zickus Hate is learned | of espresso... _____ | mgr. 
of development ----------------------------------- | |) | quantum development zickus@udel.edu (302) 798-0899 | |___| | claymont, delaware -------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Path: bounce-back Newsgroups: comp.parallel From: rick@cs.arizona.edu (Rick Schlichting) Subject: Kahaner Report: Summary of Japanese High Performance Computing, 1993 Followup-To: comp.research.japan Date: 12 Dec 1993 20:47:37 -0700 Organization: University of Arizona CS Department, Tucson AZ [Dr. David Kahaner is a numerical analyst on sabbatical to the Office of Naval Research-Asia (ONR Asia) in Tokyo from NIST. The following is the professional opinion of David Kahaner and in no way has the blessing of the US Government or any agency of it. All information is dated and of limited life time. This disclaimer should be noted on ANY attribution.] [Copies of previous reports written by Kahaner can be obtained using anonymous FTP from host cs.arizona.edu, directory japan/kahaner.reports.] From: Dr. David K. Kahaner US Office of Naval Research Asia (From outside US): 23-17, 7-chome, Roppongi, Minato-ku, Tokyo 106 Japan (From within US): Unit 45002, APO AP 96337-0007 Tel: +81 3 3401-8924, Fax: +81 3 3403-9670 Email: kahaner@cs.titech.ac.jp Re: Summary of Japanese High Performance Computing, 1993 12/08/93 (MM/DD/YY) This file is named "j-hpc.93" ABSTRACT. A brief summary of developments in Japanese High Performance Computing. This report extracts a variety of material from earlier reports, and provides a synthesis of these developments as of the end of 1993. High performance computing means both traditional supercomputing, and the newly emerging field of parallel computing. In Japan, both areas are under heavy scrutiny and development in both the private and public sector. Here we survey the current trends. The three large Japanese supercomputer vendors, NEC, Fujitsu, and Hitachi have been actively building high-end shared memory supercomputers. The emphasis has been to capitalize on Japanese excellence in technology even while admitting that they are behind in software. At this date, each of these companies has existing shared memory systems whose single processor peak performance exceeds anything produced in the US. The three Japanese supercomputer vendors provide single processor peak performance of 5-8 GFLOPS, while Cray Research Inc's (CRI) newest C-90 has a 1 GFLOP peak. Multiprocessor shared memory systems have peak performance that is proportional to the number of processors. Multiprocessor systems from the Japanese vendors have 2-4 processors, while a C-90 can have 16. Nevertheless, peak performance on a 16 processor C-90 is 16GFLOPs, whereas peak on Hitachi's S3800 with 4 processors is 32GFLOPs. The S3800 also illustrates the point that excellent technology can leapfrog--two years ago Hitachi was hardly even mentioned as a player in the supercomputer field, today it has the world's fastest single processor. Very high peak performance is obtained partly by using excellent technology, and partly by using large numbers of vector pipes. The Hitachi S3800 has a 2ns clock and NEC's SX-3R has a 2.5ns clock, both using only silicon technology. The C-90 clock is 4ns; CRI will not have a 2ns silicon machine for several years. Peak performance is directly proportional to clock speed. Large numbers of vector pipes mean that 8 or even 16 of the same floating point operation can be done simultaneously.
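As a rough worked example of where such peak figures come from (the pipe counts below are round numbers assumed for illustration, not vendor specifications): at one result per pipe per clock, a 2ns clock gives 8/2ns = 4 GFLOPS with 8 result pipes and 16/2ns = 8 GFLOPS with 16, which brackets the 5-8 GFLOPS single-processor figures quoted above. The same arithmetic in C:

/* peak.c -- illustrative only: peak rate from clock period and the
 * number of floating point results retired per cycle.  The 2ns clock
 * matches the S3800 figure above; the pipe counts are assumed.
 */
#include <stdio.h>

int main(void)
{
    double clock_ns = 2.0;              /* clock period, nanoseconds */
    int results_per_cycle[] = { 8, 16 };
    int i;

    for (i = 0; i < 2; i++) {
        /* results per nanosecond == billions of results per second */
        double gflops = results_per_cycle[i] / clock_ns;
        printf("%2d results/cycle at %.1fns -> %.1f GFLOPS peak\n",
               results_per_cycle[i], clock_ns, gflops);
    }
    return 0;
}

Whether real code approaches these rates depends on keeping all of those pipes busy, which is what the vector length requirements discussed next are about.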
This requires that arrays of floating point numbers must be segmented into 8 or even 16 parts for maximum performance. Thus, problems with very long vector lengths are most suitable candidates for high performance. Peak and real performance may be very different and users are becoming sophisticated at not accepting a single number as a performance measure and are demanding benchmarks based on their specific applications. Most applications obtain a small fraction of peak and might even be dominated by scalar arithmetic, I/O, or other operations that are not suitable for speedup by vectorization. One reason why Cray has been successful in bidding against Japanese computer manufacturers is that they have a reputation for balanced performance from all parts of their system. Nevertheless, there is usually some application that can take advantage of any existing performance, and if there is enough determination these applications can benefit from architectures that are not appropriate for other problems. Fujitsu, for example, has been targeting both oil simulation and weather as particular application areas for their hardware, with some success. They also have started to develop applications in chemical fields. NEC and Hitachi also have some particular application fields for their shared memory systems. There is also the issue of whether multiprocessor shared memory systems are to be run in multi-user or single-user mode. In the former case, a user gets access to a single processor. At the moment, Japanese shared memory supercomputers are mostly operating as one user on one processor. In a computer center with a multi-processor system, giving users a single processor can result in high center throughput, but it does not give users access to the complete hardware. Software aspects are very different in these cases too. Users of Japanese systems are only beginning to experiment with multiprocessors. Not surprisingly, their initial experiences are frustrating and there are many anecdotal stories about poor performance. But performance will get better and ultimately may be comparable, in percentage of peak, to CRI's figures. As part of this year's economic stimulus package the Japanese government has agreed to purchase and install 15 supercomputers (some of which may be parallel computers) by 1 April 1994 at government research labs. US products will be significantly represented, although some may be "integrated" by Japanese companies. Naturally, the positioning and bidding war for these systems is intense. (At the moment, CRI, Maspar, and Thinking Machines have bids that were accepted.) For example, there are rumors that Hitachi's S3800 will be the company's last traditional shared memory supercomputer. Hitachi will have several years to ponder this, and successful bids for some of these 15 systems could significantly influence them. Similarly, many Japanese feel that Fujitsu's current shared memory VP2000/VPX is not competitive any longer, with NEC, Hitachi, and CRI making shared memory systems with 3-6 times more total peak performance. Although Japanese supercomputers have a reputation for hardware reliability, application software is still lagging. With a smaller installed base it is difficult to convince Western companies to invest in converting well known packages. Also, there is a general sense that large shared memory systems are low volume sellers (although high revenue fraction for the hardware vendors), and perhaps a dying breed. 
Sometimes, software porting is done by the Japanese vendors directly, and almost always a Japanese interface is provided. Pre and post-processing packages are also common. Since the existence of these packages is an important selling point, the Japanese have invested significant financial resources and talent in software development, and this also provides a learning experience for their software engineers as well as providing a variety of new capabilities. However, CRI still maintains a significant lead because of the number of applications that run on their systems. For most application areas, there are fully Japanese-written packages, especially in structural analysis, electromagnetics, and to a growing extent, computational fluid dynamics (CFD). Because of the application bent within the Japanese science and technology community, computational simulation is heavily used and this has led to new software and enhancements to existing software. Industrial use of engineering software is high, and applications to wind load on buildings, air flow around automobiles, shock waves caused by high speed trains in tunnels, quenching of molten steel, heat flow while making flat panel glass, etc., are fairly common. As one example of brand-new Japanese software we mention the US$10M industry-funded Alpha-flow project, which over the past four years has developed a large package for use in computational fluid dynamics (CFD, single phase flow). This is now being sold by the individual participant companies. For example, Hitachi and Fujitsu have announced that Alpha-flow will be available on their supercomputers. This does not yet appear to be a threat to Western software developers, but might be if its capabilities are expanded. There are also some unique software products for Japanese supercomputers. For example, Fujitsu has a product called Fortran KR, which combines the traditional Fortran programming language with object-oriented extensions, fuzzy logic and other seemingly exotic capabilities. Fujitsu claims that KR is being used (in Japan) in process control applications. KR, while not a major product, illustrates an important point, that in Japan, high performance computing is not entirely focused on traditional numerical applications. While these are still the bread and butter of supercomputer runs, a higher fraction of Japanese applications have no, or very little, floating point compared to the situation in the West. Examples are in circuit layout, reasoning, etc. There is also (relatively) less interest in developing applications than in running existing packages. In the US, supercomputing began in earnest at the national labs where vendors supplied bare bones hardware and left it for creative scientists to generate software. Although most US vendors admit that they need to emphasize applications over technology, there is still a strong tendency in the US to focus on the code development level for applications. In Japan, where supercomputers moved rapidly into industry, integration of packages into work flow is emphasized. There is a widespread recognition in Japan that in the future, distributed memory parallel processing (MPP) will be an important, perhaps the dominant, computing mode. Until very recently, there have been no general purpose Japanese parallel computers, and US computer vendors have been successful at selling products in Japan. Currently there are about 60 US commercial systems running or being installed in Japan.
Many are small, but there are also larger systems such as CM-5, Paragon, KSR, NCR, NCube, etc. I am told that CRI has already sold a few of their new T3D parallel systems; this will probably be successful because T3D runs existing CRI application software and because of the company's strong reputation. By now, there are several Japanese commercial parallel computers, the two most notable being Fujitsu's AP 1000 and NEC's Cenju-3. Hitachi has announced plans to use HP's PA-RISC architecture to design and build a one thousand node MPP, and a more detailed product announcement is likely very soon. The AP is a 2-D mesh machine and Cenju-3 uses a multistage network. Fujitsu is well ahead of other Japanese companies in providing access to its parallel computers. The company has set up a parallel processing research facility near Tokyo, with two 64cpu systems and one 1024cpu system (the maximum AP configuration), and allows researchers worldwide to log in and access the equipment for general science; at their recent parallel processing workshop it was announced that there were over 600 registered users of this facility including 11 groups from Europe and 13 from North America. (There are also several APs in Japan outside Fujitsu, for example at Kyoto University.) An AP has been at the Australian National University (ANU) for over two years, and extensive collaborative work is performed with Fujitsu. The company is also working toward an agreement for similar collaborations with Imperial College London. NEC says that their Cenju-3 is "being evaluated". I am not aware of any Cenju-3 systems running outside the company. At the recent Supercomputing '93 meeting in the US, NEC displayed visuals and charts about Cenju-3. None have yet been sold in Japan, although there has been one 16 processor sale to the Dutch aviation and space research lab NLR, with delivery by mid 1994. Both the AP and Cenju are third or fourth generation systems. They have evolved from specialized parallel processors. AP was originally a graphics engine called CAP; Cenju was designed to solve transient circuit problems (SPICE) and currently uses VR4400 RISC processors (maximum configuration is 256 processors each of which can run at up to 75MHz and has 50MFLOP peak performance). AP also has the capability to accept a vector processor for each cpu (Fujitsu's 50MHz Micro VP) on an internal bus. In addition there are definite plans, early in 1994, to upgrade the AP's basic cpu from a 25 MHz SPARC to a 50MHz Viking chip set, which will significantly improve the basic cell performance. Integer performance on the AP, such as sorting, is already comparable to the CM-5's, although floating point is much worse. Several APs, which have a reputation of being exceptionally reliable, are likely to be sold to users at Japanese companies and university labs (with and without the new cpu) who are not concerned with its lack of commercial software. With the new processor, if Fujitsu begins to sell the AP in earnest it is possible that NEC will respond by placing their own single chip 200MFLOP vector processor (called VPP) in Cenju. In fact, the history of Japanese parallel processing is quite different from that of the US. In the US, work on the ILLIAC IV was begun in the 1960s by ARPA. Things began later in Japan, but there has been work on special purpose parallel systems since the 1970s (at least). But, until very recently, there were no general purpose parallel computers.
The strategy seemed to be to build special systems for clearly identified needs, and then attempt to expand them to larger classes of users. Both AP and Cenju appear to be systems that may reach the market to compete with Western distributed memory processors such as CM-5 or Paragon. However, I do not sense a strong commitment to these systems as real products, perhaps because of the lack of parallel applications or high cost of converting existing software. I believe that the companies are honestly unsure of whether they should push forward with vector/parallel computing, massively parallel computing, or some combination of both. For example, Fujitsu's official word on the AP is that "this machine is offered in selected markets to research centers." Similarly, NEC markets Cenju-3 "as a tool for pioneering researchers with a vision for the future". Both companies mention the use of their parallel machines as super-servers on a network. Fujitsu's 2nd AP Parallel Processing Workshop, which occurred in Nov 1993 at the company's Kawasaki facility suggested several things to me. (1) I believe that Fujitsu has really taken the lead in Japanese parallel computing; by opening up the AP to outside users they have gained an enormous amount of experience and visibility. Their workshop reported an impressive number of applications as well as the usual confidence building exercises. (2) The Fujitsu--ANU collaboration is a big win for both sides. ANU has become a player in parallel computing research; Fujitsu has obtained not only the general experience of this group and a window into Western thinking, but more explicitly several very specific pieces of useful software (including an extended utilities kernel, a parallel file system, a nice text retrieval package, system, language and compler tools, and a variety of math library software). Recently, several of Australia's best computer scientists, who have done excellent numerical work on the AP, are now turning their attention to the VPP500 (see below), so Fujitsu will derive some benefit there too. At the workshop, both senior company management and ANU attendees were very enthusiastic. (3) There is competition between VPP and AP groups inside Fujitsu, as these machines originate from different divisions. Based on the workshop I assume that the AP side is feeling very good now. It is very possible that another kind of parallel system will lead Fujitsu and NEC's efforts into parallel computing. Both companies have seen that, at the moment, shared memory supercomputers are successful at solving real problems and users like that "computational model". One excellent way to make parallelism easy is to give each processor as much performance as possible, use relatively few processors, and make their interconnections direct. If this could be done cost effectively, customers would congregate in droves. The shared memory concept that Western parallel computers are supporting is an indication that most users are much more comfortable with that approach than with distributed memory. If the latter is necessary, users would prefer a network that is simple to understand and also efficient. In my opinion, if Japanese vendors attempt to compete with Western companies by developing parallel processors using existing commodity processor technology they had better add some very unique characteristics, because of the heavy edge the West has in software development. Thus it makes sense to push technological strength as a value added feature. 
Fujitsu's VPP500 is a good example (not to be confused with NEC's VPP chip). A system with up to 222 water cooled cpus (9.5ns) using some gallium arsenide, each of which is itself a vector processor capable of 1.7GFLOPs, based upon the Fujitsu VP2000 processor (also similar to, but faster than the older VP400 processor). When these processors are connected by a high speed crossbar switch allowing every processor equal access to every other processor, we get a system with peak of almost 340GFLOPs. Even a single cpu can provide significant performance (4cpus is the minimum configuration). A 140cpu system, somewhat of a prototype of the VPP500, is installed at the National Aerospace Lab near Tokyo and now holds the current performance record of about 120GFLOPs on a section of NS3D code, and about 70GFLOPs on more realistic applications. Several VPP500 systems with a handful of processors each have been sold to sites in Europe; in Japan, systems will be installed at Tsukuba University's Computing Center and MITI's new Angstrom Lab. A full VPP500 is large (each processor is about 1cubic foot) and expensive (one very rough estimate of the 140cpu NAL machine was over US$100M), latency (time through the system from one processor to another) and startup overhead of the network are not well documented -- few objective performance evaluations are available. Also there is as yet very little parallel software. (A Fortran compiler that is compatible with both the VPP500 and the AP1000 is just now being released.) But the VPP500 exploits the capabilities of the Japanese electronics companies to build devices, and it can be potent for specific applications. I would not be surprised to see NEC adopt a similar approach, for example using SX-3 like processors, connected by a high speed crossbar. There is no doubt that they can do this, but at the same time it is more likely that they will concentrate on saleable rather than record setting systems. To satisfy the trends toward downsizing and cost reduction I also suspect that we will see SX-3 or VP2000 compatible, air cooled processors, perhaps enhanced by using new CMOS technology, higher integration, more advanced packaging, etc. Smaller systems also make sense in other ways, because many of the recent shared memory supercomputer sales have been low-end systems. Of course, companies would much rather sell a few large systems because the profit per unit is greater but the trend everywhere is toward downsizing. Another advantage to present-day supercomputer companies of emphasizing compatibility is that software that is already running on a vector supercomputer should be easier to port, and could even be code compatible in some cases. The variety of parallel computer architectures that are available in the US are also being prototyped in Japan, but outside of universities, Japanese users have little interest in writing software. For example, the purchase of general purpose Fortran libraries (such as IMSL) is very much less common than in the West. Most sites will buy a system, assuming it has reasonable price-performance, only if commercial applications are running. Thus the big winner will be the vendor with the largest application base. Another point to note about Japanese parallel computer activities, is the lack of "startup" companies, such as Thinking Machines, NCube, KSR, etc. 
The computer giants, NEC, etc., are naturally concerned about taking business away from themselves, and while they have many internal development projects, they do not seem to be rushing to productize them. Instead, the role of startup seems to be taken by other companies that are attempting to move into the computer market. Several examples illustrate the point. NKK is the Western name for Nippon Kokan or Nippon Steel Tubing company, a US$12B company that is known for steel and other metal products, certainly not for computers. But NKK has recently teamed with Convex in the US and is sending upwards of a dozen software developers to Texas to learn about the latter's new parallel processor and eventually to develop commercializable software for it. NKK has significant computer expertise because of automation within steel mills, and is hoping to capitalize on it to develop new business directions just as steel sales flatten. One can see this in other ways too. NKK has just announced as a product a computer entirely developed in-house (XT&T) that combines a keyboard, a pen-based interface with their own ASIC chip for handwriting recognition, and an X-windows terminal interface. NKK's ambitious corporate goal is to have no more than 50% of its business in steel by the end of the decade. A related example is the large, government funded Real World Computing (RWC) project, a ten-year US$50-60M/year (plus salaries) effort to develop computing technology to solve ambiguous and conflicting problems the way humans do on a routine basis. The legal entity for RWC is a Partnership, which has set up a lab with both an Intel Paragon and a Thinking Machines CM-5. Among the partners are the usual NEC, Hitachi, etc., but also the Japan Iron and Steel Foundation, which through its members is searching for new opportunities with a new technology. Two other examples of new developments outside of the traditional computer companies are efforts by Sanyo and Sharp. Sanyo is now selling Cyberflow, a compact 64 cpu parallel processor (also a mesh or torus). Each processor is a modest 10MFLOP, but is a single chip dataflow processor of which 16 fit on one board. The current design allows for up to 1024 processors. Sanyo had an earlier version of this too, called EDDEN, although it never made it out of the lab. Sharp has a joint project with Mitsubishi Electric and has developed what they term a Data-Driven Processor, similar to a dataflow design, which has 20MFLOP peak performance. As many as 1024 of these can be connected. The company is envisaging specialized applications in image processing, digital signal processing, etc. Finally, Matsushita, the giant consumer electronics company (also known as National and Panasonic) has a parallel processing project too. This is a system with an unusual design that was created originally for CFD at Kyoto University. It has undergone several name changes, ADENART, ADENA, and finally OHM, but now has 256 100MFLOP processors. This was announced as a product in 1992 but there do not appear to have been any external sales. The economic downturn has definitely hurt this effort, but at the moment it is hanging on. A recent study found dozens of parallel processing projects at Japanese companies. Most will not reach the street and virtually all begin with the goal of satisfying a special application need. (If we include neural network hardware and other embedded systems the count would be much higher.)
But each also provides a learning function to the engineers and software developers. When company treasuries were overflowing it was easy to justify these projects. Now it is tougher, and certainly productizing them will occur very cautiously. In the meantime Japanese users are waiting for applications to arrive. Any discussion of high performance computing in Japan must touch on the topic of networking. We mention this here briefly. A more complete evaluation requires a separate report. Networking activities, such as the Internet, were slow to begin in Japan and still are not nearly as common as in the US. Nevertheless, many technology organizations now have some electronic mail capability. Other networking services such as remote access to computer, etc., are not widely available. Further, experiments with very high speed networks are uncommon or just beginning. The Japanese look at US developments in the national information infrastructure with a certain degree of envy, hoping that their own government will make similar investments in R&D. US networking products are heavily used, CISCO, UltraNet, HIPPI etc., and there are few competitive Japanese equivalents. There is one area of networking in which the Japanese are significantly ahead of the US, and this will ultimately become very important commercially. The Japanese telephone company, NTT, is the world's largest company with operating revenues of over US$60B. NTT has a major commitment to installing optical cable to every home in Japan by the year 2015 (fiber to the home, FTTH). Associated with that will be the availability of broadband ISDN (B-ISDN) service. Totally digital, ISDN will allow voice, image, sound, etc., to be carried simultaneously. Already, narrowband ISDN (N-ISDN), 64K bits per second, is widely available. In fact there are thousands of N-ISDN capable coin operated public telephones throughout Japan. These have both an analog jack for ordinary phone modems as well as a digital jack for the higher speed digital N-ISDN connections. NTT was a somewhat optimistic in their belief that the public would embrace ISDN immediately. But it is coming, although perhaps somewhat more slowly than the company would like. And NTT is extremely serious about this effort. In fact, they have changed their corporate slogan to VI&P for Visual, Intelligent, and Personal communications, focusing on the services that will be available with the new ISDN. Ultimately these will be determined by creativity in the marketplace, but NTT has been experimenting with a variety of fascinating ideas. These include personal telephone numbers attached to people rather than places, 50 channels of HDTV, super HDTV, interactive music lessons, multi-media fax, automatic map generation and fax-back for input telephone numbers, 3-D video display without glasses, multimedia video conferencing, etc. At the heart of the ISDN system are very high speed switches (ATM switches). NTT and many Japanese (as well as Western) companies are developing these; Japanese are among the most advanced. (NTT, Fujitsu, and Siemens plan to have a public net ATM switch by next year.) Parallel computing plays an important part in this technology, and NTT is actively at work on parallel machines. For example, COSINE-III is a 3D-mesh multiprocessor with 64cpus that uses bi-directional free-space optical interconnects. As with other Japanese projects this one is also focused on a particular application, communication. 
NTT has also established a 2.4Gigabit per second optical link between two laboratories about 100km apart. A Cray2 at one site and a Convex at the other are linked using FDDI and HIPPI to study the problems associated with such high speed links. NTT and other companies believe that the distinction between computing and communication is fast disappearing, and the fields must be integrated. (NEC has also changed its logo to C&C, Computing & Communication. It is engaged in a joint venture with Toshiba and a small cable TV company to experiment with multimedia broadcasting.) Thus, leaders in one technology will automatically be leaders in the other. However, Japanese regulations do not currently permit telephone companies to engage in broadcasting, although in the US telephone, cable TV and entertainment businesses are rapidly joining hands to provide multimedia services. So it is not yet clear who will be the leader in the new information age. ----------------------------END OF REPORT---------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel To: comp-parallel@uunet.UU.NET From: pc94@cscs.ch Subject: PC94 Second Announcement Message-ID: <1993Dec13.105638.6173@cscs.ch> Keywords: High Performance Computing Sender: pc94@cscs.ch Organization: Centro Svizzero di Calcolo Scientifico, CH-6928 Manno Date: Mon, 13 Dec 1993 10:56:38 GMT --------------------------------------------------------------------------- PC'94 (PHYSICS COMPUTING '94) The 6th Joint EPS-APS International Conference on Physics Computing Lugano, Switzerland, Palazzo dei Congressi August 22-26, 1994 (Second Announcement) --------------------------------------------------------------------------- Jointly organized by the Swiss Scientific Computing Center (CSCS), Manno, Switzerland, belonging to the Swiss Federal Institute of Technology, Zurich (ETHZ), the EPS Interdisciplinary Group on Computational Physics and by the American Physical Society, Division of Computational Physics. Sponsored by EPS-APS, ETHZ, Computers in Physics, NEC Scientific Program --------------------------------------------------------------------------- PC'94 will give a detailed overview on the newest research results and developments in computational methodology for academia and industry, including invited papers, contributed papers, poster sessions, tutorials and vendor exhibitions. The purpose of the conference is to bring together researchers interested in innovative approaches in computational physics. Special emphasis will be given to algorithmical and high- performance computer implementation issues. Tutorial sessions organized by leaders in their respective fields will be held on the first conference day. Invited Speakers and their presentation topics (August 23-26, 94) --------------------------------------------------------------------------- Molecular Dynamics W. Andreoni, IBM Research Centre, Ruschlikon, CH L. Colombo, University of Milan, I V. Stepanyuk, Lomonosov State Univ. Moscow, Russia Physics Education C. Rebbi, Physics Dept., Boston Univ., USA Chaos and Dynamical Systems V. Demmel, MPI fur Extraterrestrische Physik, Garching, D Mesoscopic Physics H. deRaedt, Univ. of Groningen, NL E. Kaxiras, Dept. of Physics, Harward Univ., USA Electronic Structure of Molecules and Materials L. Ixaru, Inst. of Physics, Univ. of Bucharest, R You Zhou, Physics Inst., Univ. of Zurich, CH Plasmas U. Schwenn, MPI fur Plasma Physik, Garching, D Fluids and Turbulence T. 
Poinsot, CERFACS, Toulouse, F S. Lanteri, Inria, F Climate Modeling J. Hunt, Weather Office, Bracknell, GB Parallel Computation in Physics K. Bowler, Dept. of Physics, Univ. of Edinburgh, GB G. Meurant, CEA, Dept. Mathematiques Appliquees, F M.C.Payne, Dept. of Physics, Univ. of Cambridge, GB Monte Carlo and Critical Phenomena K. Binder, Physics Inst., Johannes Gutenberg-Univ., Mainz, D I. Morgenstern, Physics Inst., Univ. of Regensburg, D Industrial Applications B. Larrouturou INRIA, Sophia-Antipolis, F P. Sguazzero IBM ECSEC Center, Rome, I Self-Organization and Growth Phenomena M. Peyrard, Faculty of Science, Univ. of Lyon, F J. Kertesz, Bit, Budapest, H Cellular Automata and Lattice Gases B.Chopard, Parallel Computing Group, Univ. of Geneva, CH Numerical Methods I.M.Sobol, Inst. Math. Modelling, Moscow, Russia S. Obara, Dept. of Chemistry, Kyoto Univ., J Tutorials (Monday, August 22, 1994) --------------------------------------------------------------------------- 1 - Parallel Computation (1 day) 2 - Visualization Tools and Techniques (1 day) 3a - Introduction to Finite Elements Methods (1/2 day) 3b - Structured and Unstructured Adaptive Grids (1/2 day) 4a - Wavelets and Fourier Analysis (1/2 day) 4b - Electronic Structure Calculations (1/2 day) 5a - Introduction to Neural Networks (1/2 day) 5b - Symbolic Computation (1/2 day) Paper Submission (before February 28, 1994) --------------------------------------------------------------------------- Original Papers should be limited to 4 pages and submitted before February, 28, 1994. Formatting details will be given in due time. Please submit 4 copies of all materials. Accepted papers will be included as posters or as oral presentations. Papers will only be accepted if at least one of the authors has registered. In contrast with the arrangements at previous conferences in the Physics Computing series, the papers will be printed prior to PC'94 in a proceedings volume, that will be distributed to participants at the beginning of the conference. Registration --------------------------------------------------------------------------- Registration fee: (1) EPS member and affiliated societies ONLY: I am sending Sfr. 370.- (Sfr. 420.- after May 1994) to the account (*). (2) I am sending Sfr. 400.- (Sfr. 450.- after May 1994) to the account (*). (*) Account: "PC'94" Nr. JE-150.761.0 Swiss Bank Corporation (SBS) CH-6982 Agno, Switzerland At the conference there is the possibility to buy daily tickets at Sfr. 200.- /day. The registration fee covers the access to the conference and the proceedings book of the conference. Refund of the registration fee (less 10% administration costs) will only be granted if notification of cancellation has reached the conference secretariat before August 1, 1994. Lunches: Sfr. 25.- /day, payable at the conference Tutorials: (Students 50%) Sfr. 200.- /half day Sfr. 300.- /day A tutorial takes place only if at least 10 participants have registered. 
Registration Form ------------------------------------------------------ >-8 ---------------- Family Name: ____________________________________________________________ First Name: _____________________________________________________________ Organization: ___________________________________________________________ Mailing Address: ________________________________________________________ ZIP/City/State _________________________________________________________ Country: ________________________________________________________________ Phone: ____________________________ Fax: ________________________________ Email: __________________________________________________________________ Tutorials: 1 ___ 2 ___ 3a ___ 3b ___ 4a ___ 4b ___ 5a ___ 5b __ Date:______________________ Signature:___________________________________ ---- 8-< ------------------------------------------------------------------- PC '94 Scientific Advisory Committee --------------------------------------------------------------------------- Dr. Ralf Gruber (chairman) CSCS, Manno, CH Dr. Robert Allan SERC, Daresbury, GB Dr. David Anderson Univ. of California, Livermore, USA Prof. Alfonso Baldereschi EPFL, Lausanne, CH Dr. Ernesto Bonomi CSR4, Cagliari, I Prof. Roberto Car Univ. of Geneva, CH Prof. Robert De Groot Univ. of Nijemegen, NL Prof. Lev Degtyarev Keldysh Institute, Moscow, Russia Prof. Geerd Diercksen Max-Planck-Institut, Gerching bei Munchen, D Prof. Wolfgang Fichtner ETH Zentrum, Zurich, CH Prof. Roland Glowinski CERFACS, Toulouse, F Dr. Frederick James CERN, Geneva, CH Dr. Richard Kenway Edinburgh Univ., GB Prof. Peter Meier Univ. of Zurich, CH Dr. Jaroslav Nadrchal Academy of Sciences, Praha, CZ Prof. Risto Nieminen Helsinki University, Espoo, Finland Dr. Elaine Oran Naval Research Lab., Washington DC, USA Dr. Jacques Periaux Dassault Aviation, Saint-Cloud, F Dr. Adnan Podoleanu Polytechnical Institute of Bucharest, Russia Dr. Sauro Succi IBM ECSEC, Roma, I Dr. Toshikazu Takada NEC Corp. Tsukuba, Ibaraki, J Dr. Marco Tomassini CSCS, Manno, CH Prof. Tanias Vicsek Eotvos University, Budapest, H Hotel Information --------------------------------------------------------------------------- Hotel reservations may be arranged through the Palazzo dei Congressi in Lugano (Tel: +41/91/21 4774, Fax: +41/91/22 0323). Early registration is advisable in order to arrange proper accommodation. Social Programme --------------------------------------------------------------------------- A social programme is in preparation. An accompanying person's programme will be organized at the conference. Additional Informations --------------------------------------------------------------------------- The final program of the conference will be sent in June 1994. Inquiries about the Conference. PC'94 Centro Svizzero di Calcolo Scientifico CH-6928 Manno (Switzerland) Tel: +41/91/508211 Fax: +41/91/506711 E-mail: pc94@cscs.ch Official Carrier --------------------------------------------------------------------------- Swissair Crossair Approved: parallel@hubcap.clemson.edu Path: bounce-back From: lih@cs.columbia.edu (Andrew "Fuz" Lih) Newsgroups: comp.parallel,comp.parallel.pvm Subject: PVM and Threads? Followup-To: comp.parallel.pvm Organization: Columbia University Department of Computer Science Sender: lih@cs.columbia.edu Summary: Can we use PVM and Threads? 
Keywords: pvm parallel threads g++ Dear Netters, We are attempting to use PVM on a Sun in conjunction with the public-domain implementation of Pthreads by Frank Mueller of Florida State University. We have gotten them to interoperate in small test cases; however, once we use large message buffers, we get hangs and dropped return messages from the processes spawned by the pvmds. Can anyone give pointers or suggestions as to whether PVM is thread safe? Rough multiple-choice answers: a. ABSOLUTELY, it should be thread safe b. POSSIBLY, but there are some areas which are dangerous c. NO-WAY, it won't work without major modifications We are doing mutex locking around pvm_send and pvm_nrecv calls; however, we are not sure what happens underneath the buffer management code when other threads are performing I/O calls at the same time. In our test case, we are spawning about 5 jobs, which send and return 250 Kbyte buffers each. Is this going to be a problem? Thanks for any info you can give; we would appreciate as quick an initial response as possible, as we were hoping to get something up and running by Wednesday. Regards, -Andrew `''' Andrew "Fuz" Lih Columbia University Computer Science c @@ lih@cs.columbia.edu Central Research Facilities \ - Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Tim J Harris Subject: San Diego Supercomputer Center? Organization: Department of Computer Science, University of Edinburgh Hello there, I'm trying to find out information on the San Diego Supercomputer Center. Could anyone who works there or nearby please write me? I'm wondering what kind of post-doc positions there are for someone with extensive experience with parallel computers and supercomputers, but any info about the center would be a help. Regards, Tim Harris =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Department of Computer Science JCMB, The King's Buildings email: harris@castle.ed.ac.uk University of Edinburgh, Scotland tel : (031) 650-5118 EH9 3JZ fax : (031) 650-7209 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: kms@doc.ic.ac.uk (Keith M Sephton,225H,5085,) Newsgroups: uk.jobs.offered,comp.parallel Subject: Job Advert: Research Assistant in Parallel Computing for Decision Making Date: 13 Dec 1993 17:25:26 -0000 Organization: Department of Computing, Imperial College, University of London, UK. I am posting this on behalf of John Darlington; if you have any queries about this job, please contact the email address at the end of the advert... Department of Computing Imperial College of Science, Technology & Medicine Research Assistant in Parallel Computing for Decision Making Applications are invited for a research assistantship in a SERC-funded research project on the development of decision algorithms on parallel machines under the direction of Professor John Darlington and Dr Berc Rustem. The application areas are economic decisions and chemical engineering process control. The basic methodology to be developed and implemented is numerical model simulation and optimisation. Applicants should have knowledge of numerical methods. Knowledge of one of the following areas will be an advantage but is not essential: optimal control, optimisation theory, parallel numerical computing. Funding for the post is available at the post-doctoral level (16347 + LA 2134) but promising graduates at a more junior level might also be considered. The post starts 1 March 1994 and is for three years.
Applicants should send their CV with the names of two referees by 10 January 1994 to Mrs Sue Brookes, Department of Computing, Imperial College, 180 Queen's Gate, London SW7 2BZ. (e-mail smb2@doc.ic.ac.uk) -- K. M. Sephton, Systems Programmer, | Email: kms@doc.ic.ac.uk Department of Computing, | Imperial College, | Tel: +44 71-589 5111 extn 5085 180 Queen's Gate, London SW7 2BZ | Fax: +44 71-581 8024 Newsgroups: alt.image.medical,comp.graphics,comp.graphics.algorithms,comp.graphics.animation,comp.graphics.avs,comp.graphics.opengl,comp.graphics.explorer,comp.graphics.data-explorer,comp.graphics.visualization,comp.human-factors,comp.sys.super,comp.sys.sgi.graphics,comp.sys.hp,comp.sys.dec,comp.soft-sys.khoros,comp.soft-sys.wavefront,comp.parallel,comp.parallel.pvm,news.announce.conference Subject: CFP: 1994 Symposium on Volume Visualization To: comp-parallel@uunet.UU.NET Path: nobody From: ebert@cs.umbc.edu (Dr. David Ebert) Date: 10 Dec 1993 13:56:42 -0500 Organization: U. Maryland Baltimore County Computer Science Dept. 1994 Symposium on Volume Visualization October 17-18, 1994 Washington, DC Call for Participation Following our three successful meetings (the Chapel Hill '89, San Diego '90, and Boston '92 Workshops on Volume Visualization), this fourth meeting will provide the opportunity for demonstrations of new developments in this evolving area. Scientists from all disciplines involved in the visual presentation and interpretation of volumetric data are invited to both submit and attend this Symposium. The Symposium is sponsored by ACM-SIGGRAPH and the IEEE Computer Society Technical Committee on Computer Graphics. This Workshop will take place during the week of October 17-21, 1994 at the Sheraton Premiere at Tyson Center Hotel in Washington DC area, in conjunction with the Visualization '94 Conference. Six copies of original material should be submitted to the program co-chairs on or before March 31, 1994. Authors from North America are asked to submit their papers to Arie Kaufman. All others are to submit their papers to Wolfgang Krueger. Suggested topics include, but are not limited to: * Volume visualization of unstructured and irregular grids. * Parallel and distributed volume visualization. * Hardware and software systems. * Validation and control of rendering quality. * Volume segmentation and analysis. * Management, storage, and rendering of large datasets. * User interfacing to volume visualization systems. * Acceleration techniques for volume rendering. * Fusion and visualization of multimodal and multidimensional data. * Visualization of non-scalar volumetric information. * Modeling and realistic rendering with volumes. * Discipline-specific application of volume visualization. Papers should be limited to 5,000 words and may be accompanied by an NTSC video (6 copies, please). The accepted papers will appear in the Symposium Proceeding that will be published by ACM/SIGGRAPH and will be distributed to all SIGGRAPH Member "Plus". Program Co-chairs: Arie Kaufman Wolfgang Krueger Computer Science Department Dept. of Scientific Visualization, GMD-HLRZ State University of New York P.O. Box 1316, Schloss Birlinghoven Stony Brook, NY 11794-4400 D-5205 Sankt Augustin 1 GERMANY Telephone: 516-632-8441/8428 Telephone: +49 (2241) 14-2367 Fax: 516-632-8334 Fax: +49 (2241) 14-2040 Email: ari@cs.sunysb.edu Email: krueger@viswiz.gmd.de Symposium Co-chairs: Roni Yagel Holly Rushmeier Dept. of Computer Science Rm. B-146, Bldg. 225 The Ohio State University NIST 2036 Neil Av. 
Columbus, OH 43210 Gaithersburg, MD 20899 Telephone: 614-292-0060 Telephone: 301-975-3918 Fax: 614-292-2911 Fax: 301-963-9137 Email: yagel@cis.ohio-state.edu Email: holly@cam.nist.gov Program Committee: Nick England - University of North Carolina, Chapel Hill Pat Hanrahan - Princeton University Marc Levoy - Stanford University Bill Lorensen - General Electric Co. Nelson Max - Lawrence Livermore National Labs Greg Nielson - Arizona State University Sam Uselton - CS Corp - NASA Ames Jane Wilhelms - University of California at Santa Cruz Symposium Committee: David Ebert - University of Maryland, Baltimore County Todd Elvins - San Diego Supercomputer Center Larry Gelberg - AVS -- -- Dr. David S. Ebert, Computer Science Department, University of Maryland, -- --- Baltimore County; 5401 Wilkens Ave., Baltimore, MD USA 21228-5398 ------- ------ ebert@cs.umbc.edu or ..!{att,pyramid,killer}!cs.umbc.edu!ebert -------- ------------------------------------------------------------------------------- Newsgroups: comp.edu,comp.ai,comp.graphics,sci.chem,comp.parallel Path: edmond From: edmond@tripos.com (Edmond Abrahamian) Subject: PhD work off-campus? Message-ID: Summary: off-campus PhD research anywhere? Keywords: PhD off-campus-research Organization: Tripos Associates, Inc. Date: Sat, 11 Dec 1993 06:54:25 GMT Date: Sat, 11 Dec 93 00:54:35 CST From: tripos!edmond@uunet.UU.NET (Edmond Abrahamian) To: uunet!comp-parallel@uunet.UU.NET Are there universities that allow PhD candidates to work on their research away from the campus? In my particular case, I am unable to completely quit my job (for monetary reasons) to pursue postgraduate work, yet my job offers a particularly good environment for research in the areas of molecular modelling, molecular mechanics, 3-d compound searching, computer graphics, parallel processing, algorithmics, and artificial intelligence, among possible others. I am interested in working towards a PhD in computer science. I am seeking a program that would allow me to do doctoral research work off-campus. Are universities in Europe more receptive to this idea than those in the U.S.? In particular, I hear that course work is not mandatory there. Can anyone help me at all on this subject? I apologize if this posting is not appropriate for this newsgroup. thanks, Edmond --------------------------------------------------------------------------- Edmond Abrahamian voice +1 314 647 8837 ext 3281 Tripos Associates fax +1 314 647 9241 1699 S. Hanley Rd. 
Suite 303 email tripos.com!edmond@wupost.wustl.edu St.Louis MO 63144 USA --------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ukola@kastor.ccsf.caltech.edu (Adam Kolawa) Subject: Parallel Programming Course Offered Date: 13 Dec 1993 19:15:08 GMT Organization: ParaSoft Corporation Parallel/Distributed Programming Course using PVM offered. ParaSoft Corporation, the leader in distributed and parallel computing tools, will conduct a hands-on, introductory course on the theory and practice of distributed and parallel computing. ParaSoft has extended their class offerings to now include PVM. The unique, hands-on focus of this course, 75% of the total time, assures that participants will gain a practical understanding of distributed computing applications. Each participant will program on a workstation linked to a network within the lab, to demonstrate and verify theoretical concepts presented in the seminar. Course Goals: Upon completion of the course, the participant will be able to: 1. Set up a simple job dispatcher with dynamic load balancing. 2. Build an application which runs on multiple platforms. 3. Implement process communication for tightly coupled applications. Course Content: 1. Theory - Introduction to parallel/distributed computing, programming models, programming environments. 2. Labs - Machine setup, Running parallel/distributed programs, basic parallel/distributed I/O, message passing, global operations, data decomposition, heterogeneous computing. Prerequisites: 1. Working knowledge of C or Fortran. 2. Familiarity with Unix. 3. Strong desire to learn about distributed computing. Dates : Thursday, February 10 - Friday, February 11 Location : ParaSoft Offices - Pasadena, CA Instructors: Dr.
Adam Kolawa - Experienced Parallel/Distributed Software Developer and lecturer on distributed computing Lab Setup: Each participant will develop distributed applications at a workstation on a network within the lab. Cost: $495 - includes a complete set of tutorial materials. Participation is limited; please call or send email to ParaSoft early to reserve your space. Applications are accepted on a first-come, first-served basis. We will be glad to help you arrange travel and hotel accommodations. For more information contact: ParaSoft Corporation 2500 E. Foothill Blvd. Pasadena, CA 91107-3464 voice: (818) 792-9941 fax : (818) 792-0819 email: info@parasoft.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cary@esl.com (Cary Jamison) Subject: Re: Data-parallel languages (non-CM C*)? Date: 13 Dec 1993 19:57:36 GMT Organization: ESL, Inc. A TRW Company References: <1993Dec7.155305.4663@hubcap.clemson.edu> <1993Dec10.135202.11275@hubcap.clemson.edu> Nntp-Posting-Host: macm546.esl.com In article <1993Dec10.135202.11275@hubcap.clemson.edu>, you wrote: > > "cary" == Cary Jamison writes: > cary> Can't say that it's an emerging standard, but HyperC seems > cary> promising. It is running on workstation clusters (usually built > cary> on PVM), CM, MasPar, and is being ported to others such as > cary> nCube. > > Is this similar to Babel's HyperTasking stuff he did while at Intel? I don't think it's at all related. HyperC is made by HyperParallel Technologies in France. It was developed there at a supercomputer center (can't remember the name right now) and is being marketed in the US by Fortunel Systems, Inc. I don't want this to get too commercial, so if anyone is interested in how to contact Fortunel, send me a private email. I'm not an expert on HyperC, just recently attended a seminar on it. It is a commercial product. Someone questioned whether it is running on MasPar or not. I thought they mentioned at the seminar that it was, but I could be wrong--wouldn't be the first time! Cary ******************************************************************** EEEEE SSS L Excellence Cary Jamison E S L Service cary@esl.com EEEE SSS L Leadership E S L EEEEE SSS LLLLL A TRW Company ******************************************************************** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mhchoy@ucsb.edu (Manhoi Choy) Subject: asynchronous IO Date: 13 Dec 1993 12:08:20 -0800 Organization: University of California, Santa Barbara Sender: mhchoy@cs.ucsb.edu I am trying to find a message-passing interface standard that supports asynchronous IO; e.g., I would like to be able to set up interrupt routines to handle messages and to be able to send out messages asynchronously. Existing tools such as PVM or P4 do not seem to support this. (Correct me if I am wrong.) Is there a reason why asynchronous IO is not supported? Is anyone trying to include asynchronous IO in their "standard"? Manhoi Choy Department of Computer Science University of California at Santa Barbara
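A note on the question above: the draft MPI standard, discussed elsewhere in this issue, does not define interrupt-driven message handlers either, but it does define nonblocking operations whose completion can be polled, which covers part of what is being asked for. The fragment below is only a rough sketch of that style against the MPI-1 draft interface (the tag value 99 and the variable names are arbitrary choices for the example); it says nothing about what PVM or P4 can be made to do.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int         rank, flag = 0, msg = 0;
        MPI_Request req;
        MPI_Status  status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Post a nonblocking receive, then poll it while doing other work. */
            MPI_Irecv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &req);
            while (!flag) {
                /* ... useful computation could go here ... */
                MPI_Test(&req, &flag, &status);
            }
            printf("received %d from rank %d\n", msg, status.MPI_SOURCE);
        } else if (rank == 1) {
            msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 0, 99, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }

Polling a posted request is of course not a true interrupt, so this only partially answers the question.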
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: thomas@wein12.elektro.uni-wuppertal.de (thomas faerbinger) Subject: ?Hyper-C (Re: Data-parallel languages (non-CM C*)?) Organization: University of Wuppertal References: <1993Dec7.155305.4663@hubcap.clemson.edu> <1993Dec9.142826.1645@hubcap.clemson.edu> Reply-To: thomas@wein01.elektro.uni-wuppertal.de (thomas faerbinger) In article <1993Dec9.142826.1645@hubcap.clemson.edu> cary@esl.com (Cary Jamison) wrote: |> In article <1993Dec7.155305.4663@hubcap.clemson.edu>, richards@wrl.EPI.COM |> (Fred Richards) wrote: |> > |> > Is any data-parallel language emerging as a standard, |> > much as PVM seems to be as a message-passing library? |> > |> > Does C*, or something *very* similar, run on any of the |> > other MPP machines (Intel, nCube, MasPar, etc.) |> |> Can't say that it's an emerging standard, but HyperC seems promising. It |> is running on workstation clusters (usually built on PVM), CM, MasPar, and |> is being ported to others such as nCube. Is (the PVM-version of) Hyper-C ftp-able somewhere? ( where? ) I'm fumbling around with PVM ( on a small workstation-cluster running ULTRIX ), CMMD and C* (on a CM5) and would like to complete this set for some kind of comparison. Thanks a lot! Thomas Faerbinger thomas@wein01.elektro.uni-wuppertal.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dewombl@cs.sandia.gov (David E. Womble) Subject: Re: Information on Paragon Keywords: Paragon performance Organization: Sandia National Laboratories, Albuquerque, NM References: <1993Dec6.143410.4616@hubcap.clemson.edu> <1993Dec10.135148.11153@hubcap.clemson.edu> In article <1993Dec10.135148.11153@hubcap.clemson.edu> hcl@fai.com (Han Lung) writes: > >This is a request for information. In Jack Dongarra's Linpack >benchmark report, a 1872-processor Paragon is listed as having a peak >performance of 94 GFLOPS, which translates to 50 MFLOPS/processor. The >i860/XP microprocessor used in the Paragon, however, has a peak speed >of 75 MFLOPS (1 multiply/2 clocks + 1 add/clock @ 50 MHz). We believe >that the Paragon should be rated at 140 GFLOPS. (In other words, >Paragon's efficiency should be lower by one-third.) Jack Dongarra >contends that, since most applications do a multiply and add together, >they cannot make use of 1 add/every other clock, thus 50 MFLOPS (1 >multiply/2 clocks + 1 add/2 clocks = 2 ops/2 clocks = 1 op/clock). >However, an HPCwire report dated Oct. 28, 1993 states: > >> 2180) SANDIA BREAKS 100 GFLOPS MARK ON 1,840-NODE PARAGON SYSTEM 44 Lines >> Albuquerque, N.M. -- Scientists at Sandia National Laboratories have >> achieved record-setting performance for the second time in as many months, >> as they recorded 102.050 GFLOPS on a double-precision complex LU > ^^^^^^^^^^^^^^ >> factorization running on their full 1,840-node Intel Paragon supercomputer. > >The figures imply that each node runs at 55.5 MFLOPS (102 GFLOPS/1840). >This exceeds the per-node peak performance of the 1872-node Paragon >listed in Table 3 of the Linpack report dated Nov. 23, 1993 (94 >GFLOPS/1872 = 50 MFLOPS/node). Unless the clock rate of the Paragon >used at Sandia is substantially faster than 50 MHz, the peak rating >which appears in Table 3 cannot be correct. > >Does anyone know, or can anyone find out, the exact processor count and >the clock rate for the Paragon installed at Sandia? > >As an aside, I don't see how one can get more than 50 MFLOPS from a >processor, even for a complex multiply/add (2 +'s for add, 4 x's and 2 >+'s for multiply = 4 x's and 4 +'s, which takes 8 clocks, so on average >1 op/clock, which gives 50 MFLOPS.
Any ideas on how to make use of the >additional add? > As one of the authors of the Sandia code that achieved 102 gigaflops/second, I can clarify the details of both the Intel Paragon at Sandia and the complex LU factorization code. Sandia's Paragon: 1840 compute nodes, each with an i860 processor running at 50 MHz for computation. The 1840 compute nodes have 37.6 gigabytes of memory. (512 nodes have 32 megabytes and the remaining nodes have 16 megabytes.) 64 disk nodes, each with a five-disk RAID array storing 5 gigabytes. 9 service nodes for ethernet, HiPPI, or DAT connections or for user logins. 2D mesh connection network with individual links capable of 200 megabyte/second transfer rates. The physical processor mesh (of compute nodes) is 16x115. Complex LU factorization code: Factored a 42,000 x 42,000 double precision complex random matrix. Storage of the matrix required 28.2 gigabytes. The matrix is decomposed to processors in a torus-wrap (or scattered) decomposition. For this run the torus-wrap mapping matches the physical mesh (16 x 115 processors). The code performs partial pivoting based on a search of a column for the entry of largest magnitude. The code runs under the SUNMOS operating system. SUNMOS provides a small, fast, message passing kernel on each compute node. The kernel requires only 250 kilobytes of memory, and can achieve communication speeds of 175 megabytes/second (for large messages). The code uses the BLAS 3 routine ZGEMM written for Intel by Kuck and Associates and distributed with the Paragon as its computational kernel. The "same" code doing a real LU factorization achieves 72.9 gigaflops, which is the number appearing in Dongarra's linpack report. The previous poster's "real" question had to do with how performance of 55.5 megaflops/node was achieved, and the answer here lies in the implementation of the ZGEMM routine in the level 3 BLAS. In particular, ZGEMM uses a Winograd algorithm on 1 x 2 blocks to exchange 8 multiplications and 8 additions for 6 multiplications and 10 additions. NOTE that the number of floating point operations does NOT change (16 in both cases), but the mix of additions and multiplications does change. The theoretical peak for the ZGEMM routine is thus 66.7 megaflops/node (assuming that the additions can be fully overlapped and that the six multiplications take 12 clock cycles). The operation count used for the LU factorization is 8n^3/3; lower order terms in the operation count are not added in. In conclusion, the peak speed of the Intel Paragon at Sandia is 140 gigaflops/second, and some applications have, or can be made to have, an operation mix that matches the architecture. ============================= David E. Womble Sandia National Laboratories Albuquerque, NM 87185-1110 (505) 845-7471 (voice) (505) 845-7442 (fax) dewombl@cs.sandia.gov (email) =============================
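To make the trade-off concrete: a complex multiply-accumulate can be rearranged so that some multiplications become additions, which suits a chip that can issue an add every clock but a multiply only every other clock. The fragment below is a generic "3M" rearrangement in plain C for a single complex term; it is only an illustration of the idea, not the Kuck and Associates ZGEMM kernel, which applies a Winograd variant to 1 x 2 blocks and arrives at the 6-multiplication/10-addition mix quoted above.

    /* Standard complex multiply-accumulate (c += a*b): 4 multiplications, 4 additions. */
    void cmac4(double *cr, double *ci, double ar, double ai, double br, double bi)
    {
        *cr += ar * br - ai * bi;
        *ci += ar * bi + ai * br;
    }

    /* Rearranged "3M" form: 3 multiplications, 7 additions.  In this unblocked
       form the total operation count rises slightly, but the work shifts from
       the multiplier to the adder; the blocked Winograd variant in ZGEMM keeps
       the total at 16 operations per 1 x 2 block. */
    void cmac3(double *cr, double *ci, double ar, double ai, double br, double bi)
    {
        double t1 = ar * (br + bi);    /* ar*br + ar*bi */
        double t2 = bi * (ar + ai);    /* ar*bi + ai*bi */
        double t3 = br * (ai - ar);    /* ai*br - ar*br */
        *cr += t1 - t2;                /* = ar*br - ai*bi */
        *ci += t1 + t3;                /* = ar*bi + ai*br */
    }

Since the additions can overlap the multiplications on the i860, cutting the number of multiplications per result is exactly what pushes the achievable ZGEMM rate from the naive 50 megaflops/node toward the 66.7 megaflops/node figure given above.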
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: jab@maths.uq.oz.au (John Belward) Newsgroups: uq.general,aus.jobs,aus.parallel,comp.parallel,comp.parallel.pvm Subject: Post Doc & PhD Schol in Queensland Australia Organization: Maths, University of Queensland Nntp-Posting-Host: axiom.maths.uq.oz.au Sender: news@cc.uq.oz.au POST DOCTORAL RESEARCH FELLOWSHIP AND PH.D. SCHOLARSHIPS Centre for Industrial and Applied Mathematics and Parallel Computing (CIAMP) High Performance Computing Unit (HPCU) DEPARTMENT OF MATHEMATICS THE UNIVERSITY OF QUEENSLAND Professor Kevin Burrage and A/Professor John Belward have been awarded a 3-year ARC Collaborative Research Grant to work on the development of an integrated software environment on a supercomputer platform for land management systems in conjunction with the Queensland Department of Primary Industries. Post Doctoral applicants should have a strong background in scientific computing and have experience with vector and/or MIMD parallel programming. A working knowledge of GIS software and database systems would also be useful. The appointment will be for a period of 3 years. Salary: $36,285 per annum Closing date: 15 February 1994 CIAMP and the HPCU have entered into a substantial collaborative agreement with the Queensland Department of Primary Industries (QDPI) to develop and implement parallel computational algorithms with spatial modelling and environmental applications. Ph.D. Scholarships are available from CIAMP as top-ups for 1994-1996 for two students with new APAs. These scholarships will each be in excess of $5,000 for each of the three years. Applicants should have a strong background in Mathematics and Computer Science. These appointments will provide the opportunity for close collaboration with a prestigious Government establishment (QDPI) on problems of national importance; access to state-of-the-art hardware including an advanced computational lab of SUN workstations, a Silicon Graphics INDIGO, a DEC alpha workstation and colour printer; network access to a Cray YMP-2D and 4096-processor MasPar MP1 sited at the University of Queensland; a stimulating research environment of approximately 12 Ph.D. students and two research fellows working on various aspects of scientific computing. Further details may be obtained from: Professor K. Burrage: phone (07) 365 3487, email address kb@maths.uq.oz.au Dr J. Belward: phone (07) 365 3257, email address jab@maths.uq.oz.au. Please forward applications and resume to the Head, Department of Mathematics, The University of Queensland, Qld 4072, fax (07)8702272. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tanner@nas.nasa.gov (Leigh Ann Tanner) Subject: Intel Supercomputer Users' Group Meeting Nntp-Posting-Host: sundog.nas.nasa.gov Organization: NAS/NASA-Ames Research Center Date: Tue, 14 Dec 1993 01:08:23 GMT Apparently-To: comp-parallel@ames.arc.nasa.gov Mark Your Calendars!! The Intel Supercomputer Users' Group Meeting will be held January 26-29, 1994 in San Diego, California. A Request for Papers will be posted soon. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: lancer@cs.montana.edu (Lance Kind) Subject: Generating a random number for a shape with C* for the CM2 Date: 14 Dec 1993 08:28:37 GMT Organization: Computer Science, MSU, Bozeman MT, 59717 I've been trying to generate a random (rnd) number for a 2-D shape without success. nrand48, lrand48 and, it seems, any other fct that uses an external seed cause a core dump. rnd(void) works but it just copies the same rnd # to every element of the shape. This is why I wanted to use a rnd fct which takes an external seed, because then I could do: shape = nrand48(shape); <- where the shape input for the seed has unique numbers. But alas, like I said the bugger causes a core dump.
Anyone have a better way of creating rnd numbers for an entire shape? Please respond via my mail address: lancer@fubar.cs.montana.edu ==>Lancer--- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cheekong@iss.nus.sg (Chui Chee Kong) Subject: performance Date: 14 Dec 1993 09:38:47 GMT Organization: Institute Of Systems Science, NUS I am trying to compare the performance of distributed codes I have written for a cluster of workstations. The codes use the Gaussian elimination method to solve a set of linear systems of equations. Both pivoting and no pivoting are considered. I would appreciate it if someone could supply me with info on the fastest serial or parallel software (preferably publicly available), how to get the software, performance figures on other parallel machines, etc. Thanks. chee kong internet: cheekong@iss.nus.sg Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm,sci.math.num.analysis From: erlendh@zapffe.mat-stat.uit.no (Erlend Helmersen) Subject: Parallel 2D-fft Sender: news@uit.no (News admin.) Date: Tue, 14 Dec 1993 11:02:23 GMT Organization: University of Tromsoe Keywords: fft, parallel Apparently-To: hypercube@hubcap.clemson.edu -- Hello everybody. I am about to implement a numerical method to solve a two-dimensional (in space), nonlinear, time-dependent system of partial differential equations using a spectral (fourier) method. I am lucky enough to have access to a 98-compute-node Intel Paragon, which I am considering using instead of a Cray Y/MP-464. My problem is that on the Paragon there exists no implementation of a 2D FFT routine. Since much of the computation in the numerical scheme lies in the FFTs, there is much to gain in speed by having a good implementation of this FFT routine. My question to you is: Is there an implementation of a 2D FFT routine using NX (the message-passing system on the Paragon, which is faster than PVM), or in PVM? I really hope someone is able to help on this. Please email me. ------------------------------------------------------------------------------- Erlend Helmersen E-mail : erlendh@math.uit.no Institute of Mathematical and Physical Sciences Phone : intl. + 83 44016 University of Tromsoe Fax : intl. + 83 55418 N-9037 TROMSOE, Norway ------------------------------------------------------------------------------ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ukeller@pallas-gmbh.de (Udo Keller) Subject: MPI Workshop, 2nd Announcement Date: 14 Dec 1993 13:14:52 +0100 Organization: PALLAS GmbH Reply-To: mpi-ws@pallas-gmbh.de Second Announcement E U R O P E A N M P I W O R K S H O P MPI, the new standard for message-passing programming, has been published recently. It is the aim of the European MPI Workshop to organize the dissemination of MPI in Europe and to collect the European developers' view on MPI. The MPI Workshop will bring together European software developers with experience in parallel computing, in particular message passing. Date: January 17/18, 1994 January 18/19, 1994 (MPI Committee only) Location: INRIA Sophia Antipolis (near Nice) Organized by: PALLAS, GMD, INRIA (for the ESPRIT project PPPE) Registration fee: 70 ECU or 450 FF (75 US$) (to be paid cash at registration) Accommodation: A block reservation has been made in the Hotel OMEGA (3 stars) in Sophia Antipolis (800 m from INRIA).
Prices are 390 FF (65 US$) for a single room, 500 FF (83 US$) for a double room. The block reservation and prices are valid until January 5, 1994. After this time accommodation in the Hotel Omega is not guaranteed. Transport: A shuttle bus will be going from Nice airport to INRIA on Monday (17th) morning at 11.00, and from INRIA to Nice airport on Tuesday and Wednesday afternoon. Otherwise taxi transport is recommended. Bus transfer from Hotel OMEGA to INRIA and back is provided. Registration fee: The registration fee covers refreshments, lunches, transport and the reception on Monday night. Due to the short notice no cheque or bank transfer payments can be accepted. As usual at the preceding MPI meetings in the US, please pay the registration fee (450 FF) in cash. How to register: Please fill out the enclosed registration form and send it to Monique Simonetti, INRIA Sophia Antipolis (full address is on the enclosed registration form). PPPE project partners should also register. You will be informed if your registration cannot be accepted. (Because of capacity limitations, the maximum number of participants is 80.) More information: For more information about the European MPI Workshop and to receive the MPI standard document, please contact PALLAS: mpi-ws@pallas-gmbh.de. ------------------------------------------------------------------------------ Tentative Agenda E U R O P E A N M P I W O R K S H O P Monday, January 17: 11:00 - 13:00 Registration 13:00 L U N C H 14:00 Welcome and introduction 14:30 Basic MPI 15:00 Advanced concepts in MPI message-passing 15:30 Writing libraries with MPI 16:00 Process topologies 16:30 Implementing MPI 17:00 Discussion 18:00 Outlook: MPI-2 19:30 R E C E P T I O N and B U F F E T D I N N E R Tuesday, January 18: 09:00 Requirements of message-passing code developers* 11:00 Statements of hardware and software vendors* 11:30 Migration from existing message-passing libraries to MPI* 11:45 Discussion 13:00 End of European MPI Workshop 13:00 L U N C H 14:00 MPI Committee Meeting 20:00 D I N N E R (cost not included in registration fee) * Contributions from code developers, software companies and hardware vendors are highly welcome. Please contact karls@pallas-gmbh.de. Wednesday, January 19: 09:00 MPI Committee Meeting continued 13:00 End of MPI Committee Meeting 13:00 L U N C H For PPPE partners: The PPPE meeting will start on Tuesday, January 18 at 14.00.
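The Monday programme above opens with "Basic MPI". For readers who have not yet fetched the draft standard, here is a minimal sketch, in C against the MPI-1 draft interface, of what that core looks like: initialize, find the process rank, move one message, and finalize (the tag 17 and the value sent are arbitrary choices for the example).

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I?       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many of us? */

        if (rank == 1) {
            value = 1994;
            MPI_Send(&value, 1, MPI_INT, 0, 17, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Recv(&value, 1, MPI_INT, 1, 17, MPI_COMM_WORLD, &status);
            printf("rank 0 of %d received %d\n", size, value);
        }
        MPI_Finalize();
        return 0;
    }

This handful of calls (init, rank, size, send, receive, finalize) is enough for a first working program; the rest of the agenda builds on them.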
----------------------------------------------------------------------------- Registration Form E U R O P E A N M P I W O R K S H O P January 17-19, 1994, INRIA Sophia Antipolis Name ______________________________________________________________ Organization ______________________________________________________________ Full address ______________________________________________________________ Phone_______________ fax _______________ email _____________ Hotel reservation I book in the Hotel OMEGA, Sophia Antipolis __ single room from Jan __ until Jan __ (= _ nights) (390 FF per night) __ double room from Jan __ until Jan __ (= _ nights) (500 FF per night) I want to participate in the dinner on Tuesday, January 18 night: yes ___ no ___ Travel plans To organize transport from Nice airport to INRIA, please indicate your travel plans Arrival at Nice Airport: date/time: __________ flight from: ________________ Departure from Nice Airport: date/time: __________ flight to: __________________ Please send this form to Monique Simonetti Phone: +33-93 65 78 64 Relations Exterieurs Fax: +33-93 65 79 55 Bureau des Colloques email: simoneti@sophia.inria.fr INRIA Sophia Antipolis 2004, Route de Lucioles BP 93 F-06902 SOPHIA ANTIPOLIS CEDEX France ----------------------------------------------------------------------------- Flight Information E U R O P E A N M P I W O R K S H O P The most convenient airport to get to INRIA Sophia Antipolis is Nice. Here are some flight connections (other direct flights to these and other destinations are available). Paris (Orly) - Nice: Jan 16: 17.55 - 19.15, 18.25 - 19.45; Jan 17: 09.20 - 10.40, 09.35 - 11.05. Nice - Paris (Orly): Jan 18: 18.00 - 19.30, 19.05 - 20.25; Jan 19: 08.05 - 09.25, 09.05 - 10.25. Brussels - Nice: Jan 16: 14.30 - 16.10; Jan 17: 09.35 - 11.15. Nice - Brussels: Jan 18: 17.00 - 18.40; Jan 19: 12.05 - 13.45. Frankfurt - Nice: Jan 16: 16.20 - 17.55; Jan 17: 08.30 - 10.00. Nice - Frankfurt: Jan 18: 18.35 - 20.20; Jan 19: 10.40 - 12.25. Amsterdam - Nice: Jan 16: 13.30 - 15.20; Jan 17: 09.05 - 10.55. Nice - Amsterdam: Jan 18: 16.05 - 18.10; Jan 19: 11.40 - 13.45. -- ---------------------------------------------------------------------------- Udo Keller phone : +49-2232-1896-0 PALLAS GmbH fax : +49-2232-1896-29 Hermuelheimer Str.10 direct line: +49-2232-1896-15 D-50321 Bruehl email : ukeller@pallas-gmbh.de ---------------------------------------------------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: enbody@ss65.cps.msu.edu (Richard Enbody) Newsgroups: comp.parallel,comp.sys.super Subject: Re: Performance figures for the Intel Paragon... Organization: Michigan State University, CPS Department References: <1993Dec13.201329.22625@hubcap.clemson.edu> I don't have the exact data you requested, but I do have some information that might help you interpret others' data. In article <1993Dec13.201329.22625@hubcap.clemson.edu>, "Brian D. Alleyne" writes: |> Measurement times on the Intel Paragon... |> |> Would anyone have the following data? |> |> >From the time that you decide to seed a message, how long does it take |> (software overhead) to send a message to the network. |> (ie. this does not include the latency of the network, just the time to |> launch a message). |> |> What is the transfer time for a random communication. |> (ie, every processing node picks another at random, and sends a |> message there at the same time.
Want the time for all messages to |> get to their destinations for a small message ~ 64 bytes or less). |> I've ping-ponged messages on wormhole-routed machines, both Symult and Intel Touchstone Delta, and found that very small messages that you are asking about don't interfere with each other on the Delta. I didn't use random patterns, but used patterns where I maximized contention. If they didn't interfere with my pattern, they will not with a "random" pattern. |> What is the transfer rate for a random communication. |> (ie, every processing node picks another at random, and |> sends a message there. Message should be of the order of 64kbytes). |> Now, if you want to send hundreds of bytes or, in your case, thousands of bytes, you can see a noticeable effect from contention on the Delta. Again, I didn't use random patterns, but used ones that maximized contention. I didn't calculate an absolute rate of communication. I measured the relative rate of communication with and without contention. Using my worst case pattern, I could see a factor of four increase across a 16x16 mesh of nodes using messages of a few hundred bytes. Also, I saturated the mesh with lots of messages. One quick burst would probably complete quickly. These times were gathered on the Intel Touchstone Delta which is a predecessor of the Paragon. The effects I noticed will probably differ more because of the change in operating system than by hardware changes on the Paragon. I hope to soon get access to a Paragon to measure the differences. Let me know if you want more information. I'm working up a paper on my data, but it will not be ready for a few weeks at least. I am gathering some measurements while running a particle Physics application (ab initio Carbon clustering). -rich enbody@cps.msu.edu
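For readers who want to reproduce this kind of measurement: a ping-pong test simply bounces a message between two tasks many times and divides the elapsed time by the number of trips. Enbody's numbers were taken with the Delta's native message passing; the sketch below uses PVM 3 purely as an illustration of the timing idea, and the spawned executable name "pingpong", the tag and the repetition count are arbitrary (error checking omitted).

    #include <stdio.h>
    #include <sys/time.h>
    #include "pvm3.h"

    #define REPS 1000
    #define TAG  7

    static double wallclock(void)               /* crude wall-clock timer */
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec * 1.0e-6;
    }

    static double pingpong(int peer, int i_start)   /* average one-way time */
    {
        int    dummy = 0, i;
        double t0 = wallclock();

        for (i = 0; i < REPS; i++) {
            if (i_start) {                      /* master sends first */
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&dummy, 1, 1);
                pvm_send(peer, TAG);
                pvm_recv(peer, TAG);
            } else {                            /* partner echoes */
                pvm_recv(peer, TAG);
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&dummy, 1, 1);
                pvm_send(peer, TAG);
            }
        }
        return (wallclock() - t0) / (2.0 * REPS);   /* half the round trip */
    }

    int main(void)
    {
        int tids[1], peer;

        (void)pvm_mytid();                      /* enroll this task in PVM */
        peer = pvm_parent();
        if (peer == PvmNoParent) {              /* master: spawn one partner */
            pvm_spawn("pingpong", (char **)0, PvmTaskDefault, "", 1, tids);
            peer = tids[0];
            printf("average one-way time: %g seconds\n", pingpong(peer, 1));
        } else {
            pingpong(peer, 0);
        }
        pvm_exit();
        return 0;
    }

Varying the message size and running many such pairs at once is then enough to see the contention effects described above.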
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: Re: Generating a random number for a shape with C* for the CM2 Organization: Professional Student, University of Maryland, College Park References: <1993Dec14.140837.21646@hubcap.clemson.edu> In article <1993Dec14.140837.21646@hubcap.clemson.edu>, lancer@cs.montana.edu (Lance Kind) writes: > shape = nrand48(shape); <- where the shape input for the seed has unique > numbers. > Huh?? How about something like: (in CM-2 C*)

    shape [X][Y] my_shape;
    with (my_shape) {
        int:current my_rand;
        psrand(time(0));
        my_rand = prand();
    }

-david David A. Bader Electrical Engineering Department A.V. Williams Building University of Maryland College Park, MD 20742 301-405-6755 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.lang.fortran,comp.parallel From: forge@netcom.com (FORGE Customer Support) Subject: FORTRAN PARALLELIZATION WRKSHOP 1-3FEB94 Keywords: PARALLEL WORKSHOP FORTRAN Organization: Applied Parallel Research, Inc. Date: Tue, 14 Dec 1993 17:24:15 GMT =============================================================== APR Applied Parallel Research, Inc.
Workshop 1-3 February 94 =============================================================== PARALLEL PROCESSING IN FORTRAN -- WORKSHOP Placerville, CA 1-3 February 1994 APR announces a three-day workshop on parallel processing techniques in Fortran, and the use of APR's FORGE parallelization tools. The instructors will be Gene Wagenbreth and John Levesque, Applied Parallel Research, Inc. Each day of the workshop includes time for individual and group "hands-on" practice with APR's FORGE tools. Participants are encouraged to bring their own programs to work on. This workshop will also present APR's new batch tools, dpf and xhpf, that have the capability of automatically parallelizing real Fortran programs for distributed memory systems. OUTLINE: Day 1: AM: Intro to Parallel Processing o Parallel architectures - SIMD & MIMD o Memory architectures - Shared, Distributed, Multi-level o Programming paradigms - Domain decomposition, SPMD o Language issues - Fortran 77, 90, High Performance Fortran o Performance measurement - profiling tools, parallel simulation PM: Intro to FORGE 90 o Overview o Source code browser o Instrumenting serial programs o Workshop using FORGE 90 Day 2: AM: Parallelizing for Distributed Memory using FORGE 90 (DMP and dpf) o Data decomposition o Loop distribution o Using APR Directives in Fortran 77 Programs - dpf o Using AutoMAGIC parallelization within dpf and xHPF o The programming model - SPMD paradigm o Parallel Simulator o Parallelization inhibitors/prohibitors o Efficiency of transformations o Problems and work-arounds PM: Open Workshop using FORGE 90 DMP Day 3: AM: FORGE 90's High Performance Fortran Products - xhpf o Overview o HPF Data Distribution Directives o Using HPF directives in Fortran 77 programs - xhpf o Using HPF directives in Fortran 90 programs - xhpf o Investigation of Parallelization Results using FORGE 90 DMP o Using the Parallel Profiler with xhpf PM: Open Workshop using FORGE 90 DMP, dpf and xhpf modules over IBM RS6K and HP/9000 workstations using PVM. Bring your own codes to work with on cartridge tape. FTP access is available from our network. ------------------------------------------------------------------------- Registration fee is $1000 ( $800 for FORGE 90 customers), and includes materials and access to workstations running FORGE 90 and PVM. Location is at the offices of Applied Parallel Research in Placerville, California, (45 miles east of Sacramento, near Lake Tahoe). Classes run 9am to 5pm. Accommodations at Best Western Motel in Placerville can be arranged through our office. Contact: Applied Parallel Research, Inc., 550 Main St., Placerville, CA 95667 Voice: 916/621-1600. Fax: -0593. Email: forge@netcom.com ============================================================================== -- /// Applied /// FORGE 90 Customer Support Group /// Parallel /// 550 Main St., Placerville, CA 95667 /// Research, Inc. (916) 621-1600 621-0593fax forge@netcom.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: pm@bonito.icase.edu (Mehrotra Piyush) Subject: Research Positions at ICASE, Hampton VA Date: 14 Dec 1993 18:00:39 GMT Organization: ICASE/NASA Langley Research Positions at ICASE The Institute for Computer Applications in Science and Engineering (ICASE) is seeking fresh PhDs for staff scientist positions in the following areas: a) systems software for parallel computers, b) performance and reliability analysis, and c) parallel numerical algorithms. 
The principal focus of the software research effort at ICASE is development of tools and environments for porting large scale scientific applications to parallel and distributed systems. The focus of the performance and reliability analysis research is development of algorithms and tools for the study and optimization of performance of complex computer systems, especially parallel and distributed systems. The focus in parallel numerical algorithms is the development and experimental investigation of scalable methods for computational fluid dynamics applications. In the software area we are looking for PhDs interested in collaborative research on runtime support systems, on compiler design and enhancements, on tools for distribution, mapping, and load balancing, and on tools for performance monitoring and prediction. In the performance and reliability analysis area we seek PhDs interested in tools and algorithms for high performance simulation, and for parallel mathematical performance and reliability analysis. Current topics of interest in the algorithms area are multilevel iterative methods, domain decomposition iterative methods, problem decomposition and parallel mapping in the presence of adaptivity, and multidisciplinary optimization. Staff scientist appointments are usually made for two years, with the possibility of a third-year extension. ICASE is a non-profit research organization located at the NASA Langley Research Center in Hampton, Virginia. The institute offers excellent opportunities to computer science researchers for collaboration on complex and computationally intensive problems of interest to NASA. ICASE staff scientists have access to Langley's 66 processor Intel PARAGON, a Cray Y/MP, and internet access to many other parallel architectures. US citizens/permanent residents will be given *strong* preference. Please send resumes to: Director ICASE, MS 132C NASA Langley Research Center Hampton VA 23681 or by e-mail to positions@icase.edu -- - Piyush
PC'94 Centro Svizzero di Calcolo Scientifico CH-6928 Manno (Switzerland) Tel: +41/91/508211 Fax: +41/91/506711 E-mail: pc94@cscs.ch
Official Carrier --------------------------------------------------------------------------- Swissair Crossair

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: joel@SSD.intel.com (Joel Clark) Subject: Re: asynchronous IO Organization: Supercomputer Systems Division (SSD), Intel References: <1993Dec14.140641.21142@hubcap.clemson.edu> Date: Tue, 14 Dec 1993 18:56:04 GMT Apparently-To: comp-parallel@uunet.uu.net

In article <1993Dec14.140641.21142@hubcap.clemson.edu> mhchoy@ucsb.edu (Manhoi Choy) writes:
>I am trying to find a standard on message passing interface that supports
>asynchronous IO. e.g I would like to be able to set up interrupt routines
>to handle messages and be able to send out messages asynchronously.
>Existing tools such as PVM or P4 do not seem to support this. (Correct me
>if I am wrong.) Is there a reason why asynchronous IO is not supported?
>Are there anyone trying to include asynchronous IO in their "standard"?

Email to mhchoy@ucsb.edu bounced off of ucsb.edu so I will post: I believe the MPI (Message Passing Interface) includes this. I think the listserver at ornl.gov has more info on MPI. Also, there have been articles on comp.parallel and comp.sys.super in the last week or two on MPI, specifically an announcement of a European conference to review the current proposed MPI standard. Intel MPP systems have had asynchronous message passing since the days of the iPSC/2 (1988). (Although the asynchronous interface was frequently problematic on the iPSC/860, one customer wrote a complete transaction processing interface supporting dozens of users and 60 disks, where every action was the result of an asynchronous message.) joel

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wieckows@peca.cs.umn.edu (Zbigniew Wieckowski) Subject: Re: asynchronous IO Organization: University of Minnesota, Minneapolis, CSci dept. References: <1993Dec14.140641.21142@hubcap.clemson.edu>

In article <1993Dec14.140641.21142@hubcap.clemson.edu> mhchoy@ucsb.edu (Manhoi Choy) writes:
>I am trying to find a standard on message passing interface that supports
>asynchronous IO. e.g I would like to be able to set up interrupt routines
>to handle messages and be able to send out messages asynchronously.
>Existing tools such as PVM or P4 do not seem to support this. (Correct me
>if I am wrong.) Is there a reason why asynchronous IO is not supported?
>Are there anyone trying to include asynchronous IO in their "standard"?

The new version of P4 released this summer presumably supports asynchronous I/O. This stuff seems too difficult to debug (signals). Messy. People prefer threads. Where do you get threads? ??? Bishak
----------------------------------------------------------------------------- Zbigniew Wieckowski, Department of Computer Science, University of Minnesota, 200 Union St. SE, MN 55455, U.S.A., (612)626-7510, e-mail: wieckows@cs.umn.edu
----------------------------------------------------------------------------- What is mind? No matter. What is matter? Never mind. -----------------------------------------------------------------------------

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From: jet@nas.nasa.gov (J.
Eric Townsend) Subject: mailing list info on TMC CM-5, Intel iPSC/860, Intel Paragon Organization: NAS/NASA-Ames Research Center J. Eric Townsend (jet@nas.nasa.gov) last updated: 29 Nov 1993 (updated mailing addresses) This file is posted to USENET automatically on the 1st and 15th of each month. It is mailed to the respective lists to remind users how to unsubscribe and set options. INTRODUCTION ------------ Several mailing lists exist at NAS for the discussion of using and administrating Thinking Machines CM-5 and Intel iPSC/860 parallel supercomputers. These mailing lists are open to all persons interested in the systems. The lists are: LIST-NAME DESCRIPTION cm5-managers -- discussion of administrating the TMC CM-5 cm5-users -- " " using the TMC CM-5 ipsc-managers -- " " administrating the Intel iPSC/860 ipsc-users -- " " using the Intel iPSC/860 paragon-managers -- " " administrating the Intel Paragon paragon-users -- " " using the Intel Paragon The ipsc-* lists at cornell are going away, the lists here will replace them. (ISUG members will be receiving information on this in the near future.) The cm5-users list is intended to complement the lbolt list at MSC. SUBSCRIBING/UNSUBSCRIBING ------------------------- All of the above lists are run with the listserv package. In the examples below, substitute the name of the list from the above table for the text "LIST-NAME". To subscribe to any of the lists, send email to listserv@nas.nasa.gov with a *BODY* of subscribe LIST-NAME your_full_name Please note: - you are subscribed with the address that you sent the email from. You cannot subscribe an address other than your own. This is considered a security feature, but I haven't gotten around to taking it out. - your subscription will be handled by software, so any other text you send will be ignored Unsubscribing It is important to understand that you can only unsubscribe from the address you subscribed from. If that is impossible, please contact jet@nas.nasa.gov to be unsubscribed by hand. ONLY DO THIS IF FOLLOWING THE INSTRUCTIONS DOES NOT PRODUCE THE DESIRED RESULTS! I have better things to do than manually do things that can be automated. To unsubscribe from any of the mailing lists, send email to listserv@nas.nasa.gov with a body of unsubscribe LIST-NAME OPTIONS ------- If you wish to receive a list in digest form, send a message to listserv@nas.nasa.gov with a body of set LIST-NAME mail digest OBTAINING ARCHIVES ------------------ There are currently no publicly available archives. As time goes on, archives of the lists will be made available. Watch this space. -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: roland@sics.se (Roland Karlsson) Subject: Re: Sequent parallel programming reference needed Organization: Swedish Institute of Computer Science, Kista References: I got a book from Sequent Sweden some weeks ago. It is called "Guide to Parallel Programming on Sequent Computer Systems - Third Edition". Contact some Sequent dealer and I suppose you can get one for free. Otherwise, it is printed by PRENTICE HALL, Englewood Cliffs, New Jersey 07632 (ISBN 0-13-370777-8). 
-- Roland Karlsson SICS, PO Box 1263, S-164 28 KISTA, SWEDEN Internet: roland@sics.se Tel: +46 8 752 15 40 Fax: +46 8 751 72 30 Telex: 812 6154 7011 SICS Ttx: 2401-812 6154 7011=SICS

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Path: penttone From: penttone@cs.joensuu.fi (Martti Penttonen) Subject: school on parallel computing Organization: University of Joensuu Date: Wed, 15 Dec 1993 13:49:05 GMT Apparently-To: comp-parallel@eunet.fi

WINTER SCHOOL ON PARALLEL COMPUTING and Finnish symposium on computer science

The winter school of the Finnish Society for Computer Science is organized in 1994 by the Department of Computer Science of the University of Joensuu. The topic of the school is PARALLEL ALGORITHMS AND PARALLEL COMPUTERS. The program of the school consists of
* Bill McColl (Oxford): "Parallel algorithms and architectures"
* Friedhelm Meyer auf der Heide (Paderborn): "Simulations of shared memory"
* Friedemann Mattern (Saarbrucken): "Distributed algorithms"
six hours each. In combination with the winter school, there is a national CONFERENCE ON COMPUTER SCIENCE. The program of the conference and the contents of the invited talks follow.

The time of the school is January 10 to 12, 1994, and the place is the Siikasalmi Research Station in Liperi (close to Joensuu), Finland. There is no registration fee, but participants should pay their own travel and accommodation. Accommodation at the Research Station costs 260 FIM in a double room (300 FIM in a single room, availability limited) for two nights, meals included. 1 USD = 5.8 FIM. Participation is welcome, but because of space constraints, only a few more participants can be accepted.

For more information, contact Martti Penttonen, Department of Computer Science, University of Joensuu, P.O.Box 111, 80101 Joensuu, Finland; penttonen@cs.joensuu.fi; Tel: +358 73 151 3105; FAX: +358 73 151 3290

========================================================================

TENTATIVE SCHEDULE

Monday, January 10, 1994
8:00 bus from Hotel Kimmel to Siikasalmi
9:00-10:45 Bill McColl: "Parallel algorithms and architectures"
11:00-12:00 lunch break
12:00-13:45 Meyer auf der Heide: "Simulations of shared memory"
13:35-14:00 coffee break
14:00-15:45 Mattern: "Distributed algorithms"
16:00-16:20 Juha Kärkkäinen, Esko Ukkonen (University of Helsinki): Two and higher dimensional pattern matching in optimal expected time
16:20-16:40 Niklas Holsti, Erkki Sutinen (University of Helsinki): Approximate string matching using q-gram places
16:40-17:00 Erkki Sutinen (University of Helsinki): On average-case behaviour of the q-gram method
17:00-17:20 Tomi Janhunen (Helsinki University of Technology): Cautious autoepistemic reasoning applied to general logic programs
17:20-17:40 Jussi Rintanen (Helsinki University of Technology): Approaches to priorities in default reasoning
17:40-18:00 Kari Granö (Nokia Research Center), Jukka Paakki (University of Jyväskylä): Specifying communication protocols in PPL
18:00-19:00 dinner

Tuesday, January 11
8:00- 9:00 breakfast
9:00-10:45 Bill McColl: "Parallel algorithms and architectures"
11:00-12:00 lunch break
12:00-13:45 Meyer auf der Heide: "Simulations of shared memory"
13:35-14:00 coffee break
14:00-15:45 Mattern: "Distributed algorithms"
16:00-16:20 Ville Leppänen (University of Turku), Martti Penttonen (University of Joensuu): Work-optimal simulation of PRAM models on meshes
16:20-16:40 Simo Juvaste (University of Joensuu): A note on simulating PRAMs by sparse microprocessor networks
16:40-17:00 Juha Hakkarainen (University of Joensuu): Distributed knowledge representation in sparse distributed memory 17:00-17:20 Jyrki Katajainen (University of Copenhagen), Tomi Pasanen, Jukka Teuhola (University of Turku): In-place mergesort 17:20-17:40 Heikki K{lvi{inen (Lappeenranta University of Technology): Randomized Hough transform (RHT): New extensions to line detection 17:40-18:00 Merja Wanne (University of Wasa): On constructing area unions in computer cartography 18:00-19:00 dinner Wednesday, January 12 8:00- 9:00 breakfast 9:00-10:45 Bill McColl: "Parallel algorithms and architectures" 11:00-12:00 lunch break 12:00-13:45 Meyer auf der Heide: "Simulations of shared memory" 13:35-14:00 coffee break 14:00-15:30 Mattern: "Distributed algorithms" 15:30 bus to railway station and Hotel Kimmel ======================================================================== CONTENTS OF THE LECTURES Bill McColl: "Parallel Algorithms and Architectures". Lecture 1. Fast Parallel Boolean Circuits. Prefix sums, n-bit addition, n-bit multiplication. Lecture 2. Fast Parallel Comparison Networks. Merging, sorting, selection. Lecture 3. Systolic Algorithms and Architectures. Matrix multiplication, LU decomposition, algebraic path problem. Lecture 4. Interconnection Networks and Routing. Basic network properties: degree, diameter, bisection width, area. Routing h-relations. Fault tolerance and expanders. Optical communication. Lecture 5. The BSP Model. Basic properties. Simple BSP algorithm for matrix multiplication. Lecture 6. Design and Analysis of BSP Algorithms. BSP complexity theory. An optimal BSP algorithm for matrix multiplication. BSP algorithms for LU decomposition, solution of a triangular linear system. ----------------------------------------------------------------------- Friedhelm Meyer auf der Heide: "Simulations of Shared Memory" Lecture I: Routing, Hashing and Basic Simulations Lecture II: Fast Simulations with Redundant Memory Representation Lecture III: Dictionaries and Time-Processor Optimal Simulations Lecture I. We consider hashing strategies for simulating shared memory, i.e. for simulations of PRAM on a distributed memory machine, DMMs. A DMM consists of n processors and $n$ memory modules, connected by a router. The access to memory modules is restricted in so far that only one request to a module can be processed in unit time. We present shared memory simulations based on the method to distributed the shared memory among the modules using a hash function drawn from a universal class of hash functions. For this purpose we describe high performance universal classes of hash functions and fast routers to achieve simulations with expected delay O(log (n)). Lecture II. We consider several simulation techniques which use redundant memory representation, i.e. simulate each shared memory cell in several modules. The DMM now is supposed to have a perfect router. We present a deterministic simulation as well as randomized simulations. The latter has expected delay O(log log (n)) only, and runs on a restricted version of DMMs based on optical crossbar communication. Lecture III. Based on techniques from the previous lectures we now look for time-processor optimal simulations, i.e. Simulations of nt-processor PRAMs on n-processor DMM with expected delay O(t). For this purpose we describe techniques to maintain a parallel dictionary (i.e. date structure that supports parallel insertions, deletions and look ups) on a DMM. 
The slackness parameter t can be chosen as log (n), and even as small as log log (n) log* (n) for more involved simulations. ---------------------------------------------------------------------- Friedemann Mattern Distributed Algorithms In distributed systems, where processes communicate solely by messages, no process has complete and up to date knowledge of the global state. Hence, control problems such as detection of objects which are no longer accessible (i.e., garbage objects) or determination of a causally consistent view of the global system state are more difficult to solve in a distributed environment than in sequential systems. Another prominent example is termination detection. A distributed computation is terminated when all processes are passive and no message is in transit. However, since a passive process may be reactivated when a message is received, and since in general it is impossible to inspect all processes at the same time, the detection of termination of a distributed computation is non-trivial. Distributed termination detection is a "prototype problem"; research on it has contributed much to the entire field of distributed algorithms. It is closely connected to other important problems such as determining a causally consistent global state, approximating a distributed monotonic function (e.g., Global Virtual Time in distributed simulations), or distributed garbage collection. We explain these problems, discuss their importance, and show how they are related to each other and how solutions to these problems can be obtained from generalizing solutions to the termination detection problem. We also give a short introduction into the concept of virtual time logical clocks, in particular so-called vector clocks which represent the causality structure of distributed computations in an isomorphic way. We then show how this non-standard model of time is in close analogy to Minkowski's relativistic spacetime. (Among other things light cones and the causality preserving Lorentz transformations have their counterparts in distributed computations.) We introduce observers as linear sequences of events and show how faithful (i.e., causally consistent) observers can be implemented using vector clocks. Existing in a relativistic world, different faithful observers may perceive the same distributed computation differently. This fact induces interesting and non-trivial problems for detecting the truth of global predicates and for monitoring or debugging distributed systems. The topics we discuss are important from a practical as well as from a theoretical point of view. For example, distributed garbage collection algorithms are gaining much interest because of current efforts to efficiently implement object-oriented languages on parallel distributed memory machines. Also, obtaining a causally consistent image of a distributed computation without freezing it is important for debugging and monitoring purposes. On the other hand, the problem of detecting a global property of a distributed system has contributed much to a better understanding of fundamental concepts and theoretical aspects of distributed computations. 
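As an aside on the vector clocks mentioned in the abstract above, the usual update rules are small enough to sketch in a few lines of C: each process keeps one counter per process, increments its own entry on local events and sends, and takes the component-wise maximum of its clock and the piggybacked timestamp on a receive. The fixed process count NPROC and the function names below are illustrative assumptions, not anything from the lectures; this is a minimal sketch, not a complete implementation.

  #include <string.h>

  #define NPROC 4   /* illustrative fixed number of processes */

  typedef struct { unsigned long c[NPROC]; } vclock;

  /* Local event: tick our own component. */
  void vc_local_event(vclock *vc, int self) { vc->c[self]++; }

  /* Before a send: tick our own component and copy the clock into the
     timestamp that travels with the message. */
  void vc_stamp_send(vclock *vc, int self, vclock *stamp)
  {
      vc->c[self]++;
      memcpy(stamp, vc, sizeof *stamp);
  }

  /* On receipt: component-wise maximum with the piggybacked stamp,
     then tick our own component. */
  void vc_on_receive(vclock *vc, int self, const vclock *stamp)
  {
      for (int i = 0; i < NPROC; i++)
          if (stamp->c[i] > vc->c[i])
              vc->c[i] = stamp->c[i];
      vc->c[self]++;
  }

  /* e happened before f iff e's clock is component-wise <= f's and the
     two clocks are not equal; incomparable clocks mean concurrent events. */
  int vc_happened_before(const vclock *e, const vclock *f)
  {
      int strictly_less = 0;
      for (int i = 0; i < NPROC; i++) {
          if (e->c[i] > f->c[i]) return 0;
          if (e->c[i] < f->c[i]) strictly_less = 1;
      }
      return strictly_less;
  }

The happened-before test is exactly the causality relation that a faithful (causally consistent) observer must respect.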
======================================================================== -- --------------------------------------------------------------------- Martti Penttonen Department of Computer Science Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Ralf.Wirth@arbi.informatik.uni-oldenburg.de (Ralf Wirth) Subject: Status of Suprenum Project Organization: University of Oldenburg, Germany Date: Wed, 15 Dec 1993 14:43:51 GMT Hi there! Does anybody know the current state of the SUPRENUM-Project? Was it really cancelled? Also, do you know the costs of the project? I am interested in data about the SUPERB Fortran Parallizer for that machine. Are there any news about it after the Paper "Superb: Experiences and Future Research", 1992, or may it be cancelled as well? Thanks in advance. CU. Ralf. -- ***************************************************************************** * * * Name: Ralf Wirth * * Job : Student of computer science/Wizard of Nightfall (Veltins). * * Login : Henry@Aragorn, Henry@Schrottpollo * * E-Mail Address: Ralf.Wirth@arbi.informatik.uni-oldenburg.de * * ------------------------------------------------------------------------- * * No, I'm not related to him. * ***************************************************************************** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: igl@ecs.soton.ac.uk (Ian Glendinning) Subject: Re: asynchronous IO Date: 15 Dec 93 16:10:31 GMT Organization: Electronics and Computer Science, University of Southampton References: <1993Dec14.140641.21142@hubcap.clemson.edu> <1993Dec15.144124.27510@hubcap.clemson.edu> In <1993Dec15.144124.27510@hubcap.clemson.edu> joel@SSD.intel.com (Joel Clark) writes: >In article <1993Dec14.140641.21142@hubcap.clemson.edu> mhchoy@ucsb.edu (Manhoi Choy) writes: >>I am trying to find a standard on message passing interface that supports >>asynchronous IO. e.g I would like to be able to set up interrupt routines >>to handle messages and be able to send out messages asynchronously. >>Existing tools such as PVM or P4 do not seem to support this. (Correct me >>if I am wrong.) Is there a reason why asynchronous IO is not supported? >>Are there anyone trying to include asynchronous IO in their "standard"? >Email to mhchoy@ucsb.edu bounced off of ucsb.edu so I will post: >I believe the MPI (Message Passing Interface) includes this. If you mean asynchronous send/receive of messages, then yes, MPI does support it. MPI does not however say anything about parallel file I/O. >I think the listserver at ornl.gov has more info on MPI Send "send index from mpi" to netlib@ornl.gov for more information. >Also there have been articles on comp.parallel and comp.sys.super in the >last week or two on MPI. Specifically an annoucement of a European >conference to review the current proposed MPI standard. For more information about the meeting, which will be held January 17-18 at INRIA, Sophia Antipolis, France, contact mpi-ws@pallas-gmbh.de. Ian -- I.Glendinning@ecs.soton.ac.uk Ian Glendinning Tel: +44 703 593368 Dept of Electronics and Computer Science Fax: +44 703 593045 University of Southampton SO9 5NH England Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: No posting tomorrow. One of my offspring graduates tomorrow! Will try to post on Friday. 
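To pin down what "asynchronous" means in the MPI answers in this thread: MPI's support is the non-blocking point-to-point calls, which start an operation and complete it later with a test or wait; there is no interrupt-driven receive. A minimal sketch in C of the common post-receive-first pattern follows, assuming the draft standard's C binding (MPI_Irecv, MPI_Isend, MPI_Waitall); the buffer length n, the tag 0 and the peer rank are illustrative.

  #include <mpi.h>

  /* Exchange n doubles with a peer using non-blocking calls, overlapping
     the transfer with computation.  Sketch only: error checking omitted. */
  void exchange(double *outbuf, double *inbuf, int n, int peer, MPI_Comm comm)
  {
      MPI_Request reqs[2];
      MPI_Status  stats[2];

      /* Post the receive before the send; both calls return immediately. */
      MPI_Irecv(inbuf,  n, MPI_DOUBLE, peer, 0, comm, &reqs[0]);
      MPI_Isend(outbuf, n, MPI_DOUBLE, peer, 0, comm, &reqs[1]);

      /* ... do useful work here while the messages are in flight ... */

      /* Block only when the data are actually needed. */
      MPI_Waitall(2, reqs, stats);
  }

Neither buffer may be reused or read until the corresponding request has completed.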
=========================== MODERATOR ==============================
Steve Stevenson fpst@hubcap.clemson.edu
Administrative address steve@hubcap.clemson.edu
Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell
Wanted: Sterbenz, P. Floating Point Computation

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel To: comp-parallel@uunet.UU.NET Path: hilbert!george From: george@hilbert.coe.northeastern.edu (George Kechriotis) Newsgroups: Comp.parallel Subject: mesh transpose anyone? Organization: CDSP Computing Lab, Northeastern University

Hi, my problem is as follows: A 2D matrix is stored columnwise in a number of processors arranged in a mesh topology. Each processor stores some of the columns of the data array. I would like to do a transposition of this array, i.e. each of the processors should store some of the rows of the matrix... I know of efficient algorithms for the hypercube, but what happens on the mesh? [The question could be rephrased as: iPSC/860 vs. Paragon? ] thanks in advance george

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: cel@theory.lcs.mit.edu (Charles E. Leiserson) Newsgroups: comp.parallel,comp.arch,comp.theory Subject: SPAA'94 Call for Papers -- Deadline Jan. 21, 1994 Date: 15 Dec 93 13:53:26 Organization: MIT Lab for Computer Science Nntp-Posting-Host: larry.lcs.mit.edu

SPAA'94 CALL FOR PAPERS
Sixth Annual ACM Symposium on PARALLEL ALGORITHMS AND ARCHITECTURES
JUNE 27-29, 1994, Cape May, New Jersey

The Sixth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA'94) will be held in Cape May, New Jersey, on June 27-29, 1994. It is sponsored by the ACM Special Interest Groups for Automata and Computability Theory (SIGACT) and Computer Architecture (SIGARCH) and organized in cooperation with the European Association for Theoretical Computer Science (EATCS).

CONTRIBUTED PAPERS: Contributed papers are sought that present original, fundamental advances in parallel algorithms and architectures, whether analytical or experimental, theoretical or practical. A major goal of SPAA is to foster communication and cooperation among the diverse communities involved in parallel algorithms and architectures, including those involved in operating systems, languages, and applications. The Symposium especially encourages contributed papers that offer novel architectural mechanisms or conceptual advances in parallel architectures, algorithmic work that exploits or embodies architectural features of parallel machines, and software or applications that emphasize architectural or algorithmic ideas.

VENDOR PRESENTATIONS: As in previous years, the Symposium will devote a subset of the presentations to technical material describing commercially available systems. Papers are solicited describing concepts, implementations or performance of commercially available parallel computers, routers, or software packages containing novel algorithms. Papers should not be sales literature, but rather research-quality descriptions of production or prototype systems. Papers that address the interaction between architecture and algorithms are especially encouraged.

SUBMISSIONS: Authors are invited to send draft papers to: Charles E. Leiserson, SPAA'94 Program Chair, MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139 USA. The deadline for submissions is JANUARY 21, 1994.
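Back to the mesh-transpose question above: whatever the network, the redistribution from block columns to block rows is an all-to-all exchange of b x b blocks, b = N/P, plus local packing and unpacking; on a mesh it is the cost of that all-to-all, rather than the index arithmetic, that differs from the hypercube. A minimal sketch in C, assuming an MPI-style collective (MPI_Alltoall), a column-major local layout, and N divisible by the number of processors; the names and the layout are illustrative, not taken from the original posting.

  #include <mpi.h>
  #include <stdlib.h>

  /* mycols: my N x b block of columns, column-major.
     myrows: on return, my b x N block of rows, row-major. */
  void transpose_distribution(const double *mycols, double *myrows,
                              int N, MPI_Comm comm)
  {
      int nprocs, rank;
      MPI_Comm_size(comm, &nprocs);
      MPI_Comm_rank(comm, &rank);
      int b = N / nprocs;                      /* assume N % nprocs == 0 */
      double *sendbuf = malloc((size_t)N * b * sizeof *sendbuf);
      double *recvbuf = malloc((size_t)N * b * sizeof *recvbuf);

      /* Pack: the b x b block destined for processor p consists of rows
         [p*b, (p+1)*b) of my local columns. */
      for (int p = 0; p < nprocs; p++)
          for (int j = 0; j < b; j++)
              for (int i = 0; i < b; i++)
                  sendbuf[((size_t)p*b + j)*b + i] = mycols[(size_t)j*N + p*b + i];

      /* Every processor exchanges one b x b block with every other one. */
      MPI_Alltoall(sendbuf, b*b, MPI_DOUBLE, recvbuf, b*b, MPI_DOUBLE, comm);

      /* Unpack: the block received from processor q holds columns
         [q*b, (q+1)*b) of my b rows. */
      for (int q = 0; q < nprocs; q++)
          for (int j = 0; j < b; j++)
              for (int i = 0; i < b; i++)
                  myrows[(size_t)i*N + q*b + j] = recvbuf[((size_t)q*b + j)*b + i];

      free(sendbuf);
      free(recvbuf);
  }

How the library schedules that all-to-all over the mesh links is where the machine-specific work goes.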
Simultaneous submission of the same research to SPAA and to another conference with proceedings is not allowed. Inquiries should be addressed to Ms. Cheryl Patton (phone: 617-253-2322; fax: 617-253-0415; e-mail: cap@mit.edu). FORMAT FOR SUBMISSIONS: Authors should submit 15 double-sided copies of a draft paper. The cover page should include (1) title, (2) authors and affiliation, (3) e-mail address of the contact author, and (4) a brief abstract describing the work. If the paper is to be considered as a vendor presentation, the words ``Vendor Presentation'' should appear at the top of the cover page. A technical exposition should follow on subsequent pages, and should include a comparison with previous work. The technical exposition should be directed toward a specialist, but it should include an introduction understandable to a nonspecialist that describes the problem studied and the results achieved, focusing on the important ideas and their significance. The draft paper--excluding cover page, figures, and references--should not exceed 10 printed pages in 11-point type or larger. More details may be supplied in a clearly marked appendix which may be read at the discretion of the Program Committee. Any paper deviating significantly from these guidelines--or which is not received by the January 21, 1994 deadline--risks rejection without consideration of its merits. ACCEPTANCE: Authors will be notified of acceptance or rejection by a letter mailed by March 15, 1994. A final copy of each accepted paper, prepared according to ACM guidelines, must be received by the Program Chair by April 8, 1994. It is expected that every accepted paper will be presented at the Symposium, which features no parallel sessions. CONFERENCE CHAIR: Lawrence Snyder, U. Washington. LOCAL ARRANGEMENTS CHAIR: Satish Rao and Yu-dauh Lyuu, NEC Research Institute. PROGRAM COMMITTEE: Gianfranco Bilardi (U. Padova, Italy), Tom Blank (MasPar), Guy Blelloch (Carnegie Mellon), David Culler (U. California, Berkeley), Robert Cypher (IBM, Almaden), Steve Frank (Kendall Square Research), Torben Hagerup (Max Planck Institute, Germany), Charles E. Leiserson, Chairman (MIT), Trevor N. Mudge (U. Michigan, Ann Arbor), Cynthia A. Phillips (Sandia National Laboratories), Steve Oberlin (Cray Research), C. Gregory Plaxton (U. Texas, Austin), Rob Schreiber (RIACS). -- Cheers, Charles Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Newsgroups: comp.parallel From: ariasmig@gel.ulaval.ca (Miguel Arias) Subject: Help: Computational Sensors for Vision Reply-To: ariasmig@gel.ulaval.ca Organization: Dept. Genie Electrique, Universite Laval Hello, I am searching for any information regarding VLSI sensors for computer/robot vision with processing capabilities. I am very interested in references of: -Computational sensors -CMOS/BiCMOS image sensors with processing capabilities -Integrated focal plane processors -3D Sensors / rangefinders -Low level parallel processing for vision tasks -Other specialized imagers (motion, log-polar mapping, sub- pixel interpolation, etc.) -Specialized cameras with processing capabilities (edge extrac- tion, motion computation, image transformation). Please reply to my e-mail address, I'll post a summary. Thanks in advance. 
Miguel Arias e-mail: ariasmig@gel.ulaval.ca

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tanner@nas.nasa.gov (Leigh Ann Tanner) Subject: Re: Intel Supercomputer Users' Group Meeting Sender: news@nas.nasa.gov (News Administrator) Nntp-Posting-Host: sundog.nas.nasa.gov Organization: NAS/NASA-Ames Research Center References: <1993Dec14.140827.21582@hubcap.clemson.edu> Date: Wed, 15 Dec 1993 20:21:53 GMT Apparently-To: comp-parallel@ames.arc.nasa.gov

In article <1993Dec14.140827.21582@hubcap.clemson.edu>, tanner@nas.nasa.gov (Leigh Ann Tanner) writes:
|> Mark Your Calendars!!
|>
|> The Intel Supercomputer Users' Group Meeting will be
|> held January 26-29, 1994 in San Diego, California.
|>
Yes, I need a vacation!!! The dates are June 26-29 NOT January 26-29... Leigh Ann

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: harri906@crow.csrv.uidaho.edu (Harrington) Subject: Parallel Network Date: 16 Dec 1993 03:01:47 GMT Organization: University of Idaho, Moscow, Idaho Nntp-Posting-Host: crow.csrv.uidaho.edu X-Newsreader: TIN [version 1.1 PL8]

Anyone know of a good parallel network for two PCs? I need one badly! Thanks, Merry Christmas Dan Harrington

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: 16 Dec 1993 12:08:48 GMT From: jim@meiko.co.uk (James Cownie) Subject: Re: asynchronous IO (message passing) Reply-To: jim@meiko.co.uk Organization: Meiko World. References: <1993Dec15.144124.27510@hubcap.clemson.edu>

In article <1993Dec14.140641.21142@hubcap.clemson.edu> mhchoy@ucsb.edu (Manhoi Choy) writes:
>I am trying to find a standard on message passing interface that supports
>asynchronous IO. e.g I would like to be able to set up interrupt routines
>to handle messages and be able to send out messages asynchronously.
>Existing tools such as PVM or P4 do not seem to support this. (Correct me
>if I am wrong.) Is there a reason why asynchronous IO is not supported?
>Are there anyone trying to include asynchronous IO in their "standard"?

You should carefully distinguish between 1) asynchronous (or non-blocking) message passing, in which you can start a message passing op (send or receive) and then later test or wait for its completion, and 2) interrupt or signal driven message passing (probably only useful for a receive!), in which receipt of a particular message forces execution of a routine asynchronously to the execution of the user's code. MPI fully supports 1, and provides no support for 2. There are many reasons not to support the interrupt style, including:
1) It's a horrible way to write code. It's like writing big chunks of stuff in signal handlers...
2) It's hard to implement (and in particular to get right!)
3) It's hard to specify (e.g. Can you communicate from within the message handler? Can you probe for other messages here? etc...)
4) all the other things I can't remember at the moment!
-- Jim
--- James Cownie, Meiko Limited, 650 Aztec West, Bristol BS12 4SD, England; Meiko Inc., Reservoir Place, 1601 Trapelo Road, Waltham

Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ostr@acs3.bu.edu (boris ostrovsky) Subject: Threads package Date: 16 Dec 1993 17:25:28 GMT Organization: Boston University, Boston, MA, USA Nntp-Posting-Host: acs3.bu.edu Originator: ostr@acs3.bu.edu

Hello, Could anyone help me find a public domain threads package?
It should preferably be able to run on RS/6000, SGI and SUN4. Thanks, Boris Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: sci.math.num-analysis,comp.parallel From: Herman.te.Riele@cwi.nl (Herman te Riele) Subject: Massively Parallel Computing and Applics, Call for Abstracts Sender: news@cwi.nl (The Daily Dross) Nntp-Posting-Host: steur.cwi.nl Organization: CWI, Amsterdam Date: Thu, 16 Dec 1993 17:43:43 GMT Apparently-To: comp-parallel@NL.net CWI - RUU SYMPOSIA ON MASSIVELY PARALLEL COMPUTING AND APPLICATIONS CALL FOR ABSTRACTS In 1993 - 1994, CWI (Centre for Mathematics and Computer Science Amsterdam) and RUU (University of Utrecht) are organising a series of Symposia on Massively Parallel Computing and Applications. As far as the computing part is concerned, we are interested in contributions on the optimization and analysis of generic numerical algorithms on massively parallel computers. In particular, we think of iterative methods for solving large sparse linear systems of equations and for finding eigenvalues and eigenvectors of large sparse linear systems, multigrid methods for various types of PDEs, parallel methods for the solution of ordinary differential equation, software tools for parallelization, etc. On the applications side, we seek contributions in fields where optimization and analysis of numeric and nonnumeric algorithms for massively parallel computers are instrumental for real progress. In particular, we think of environmental problems, number theory and cryptography, multiple-particle systems, chemical reactions, computational fluid dynamics, seismic problems etc. The following advisory board will assist in the selection of the Symposium programs: P. Aerts, Dutch National Computing Facilities Foundation NCF O. Axelsson, Catholic University Nijmegen L.O. Hertzberger, University of Amsterdam P.A.J. Hilbers, Royal Shell Laboratory Amsterdam P.J. van der Houwen, CWI and University of Amsterdam W. Loeve, National Aerospace Laboratory NLR N. Petkov, University of Groningen M. Rem, Technical University Eindhoven J.G. Verwer, CWI H. Wijshoff, University of Leiden P. De Wilde, Technical University Delft We intend to organise bi-monthly one-day meetings each of which will be centred around a class of numerical algorithms or around a coherent applications field. The first three meetings took place in 1993 and were devoted to: "Topics in environmental mathematics" (June 4, 1993), "Parallel numerical algorithms" (Sept. 24, 1993), "Computational number theory and cryptography" (Nov. 26, 1993). The next three meetings are scheduled in the first half of 1994, namely, Febr. 4, 1994, March 25, 1994, June 3, 1994. Refereed proceedings will be published. Abstracts of possible contributions are solicited now. Please send an abstract to Herman J.J. te Riele, CWI, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands (email: herman@cwi.nl) and indicate your preference, if any, for one of the above three dates. The deadline for submission of abstracts is Jan. 14, 1994. Notification of acceptance for the first meeting will be sent by Jan. 21, 1994. and for the two subsequent meetings by Febr. 25, 1994. A limited budget is available for contributors from abroad to partially cover travel and lodging expenses. The organisers: H.J.J. te Riele (CWI) H.A. v.d. 
Vorst (RUU and CWI) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: bduncan@netcom.com (Bill Duncan) Subject: Experinces w/ Navigation Server (Sybase)? Keywords: sybase navigation parallel database Sender: netnews@netcom.com (USENET Administration) Reply-To: bduncan@netcom.com (Bill Duncan) Organization: NETCOM On-line Communication Services (408 241-9760 guest) X-Newsreader: InterCon TCP/Connect II 1.2 Date: Wed, 15 Dec 1993 10:34:25 GMT Apparently-To: comp-parallel@uunet.uu.net I'm interested in anyone who has had experience with Sybase's Navigation Server. We are a currently looking into Sybase as a database solution and are also interested in each db vendor's strategy in the parallel hw arena. Since Navigation Server is Sybase's proposed solution in this area, we've come to need some insight? Can anyone shed some light on this product? Bill Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: scp@doneyet.acl.lanl.gov (Stephen C. Pope) Subject: Announcement: CM International User Group Meeting Message-ID: Sender: news@newshost.lanl.gov Reply-To: Organization: /users/scp/.organization Date: Thu, 16 Dec 1993 21:03:41 GMT Attached please find a preliminary agenda and a registration form for the first Connection Machine International User Group Meeting. This meeting is being organized by Thinking Machines and Los Alamos National Laboratory and is open to all. To facilitate further mailings, please return an email copy of the registration form to: cm-users@acl.lanl.gov Non-LANL attendees must also send a hardcopy registration form along with payment. If you have questions, please do not hesitate to contact us at cm-users@acl.lanl.gov. We look forward to seeing you in Santa Fe! CM International User Group Meeting February 16-18, 1994 Eldorado Hotel, Santa Fe, NM PRELIMINARY AGENDA Please note that this is a preliminary agenda and is subject to change. Tuesday February 15, 1994 6:00pm - 9:30pm Registration and Reception, Eldorado Hotel Wednesday February 16, 1994 7:30am - 5:00pm Registration 9:00am - 10:00am Welcome and Keynote Speech Richard Fishman, President, Thinking Machines 10:00am - 10:30am Break 10:30am - 12:00pm Invited Talks: Scientific Applications I This session, along with the second session on Thursday, will focus on significant application domains and achievements in the realm of scientific computing. Titles and some speakers are not fully committed but the topic areas are indicated. Astrophysics, Mike Norman, National Center for Supercomputer Applications Molecular Dynamics, Peter Lomdahl, Los Alamos National Laboratory Material Science, TBD Meteorology, TBD Oil & Gas, TBD Computational Fluid Dynamics, TBD 12:00pm - 1:30pm Lunch (provided) 1:30pm - 3:00pm Panel Discussion: What is Success in MPP? Andy White, Los Alamos National Laboratory, will pose the question: ``What is Success in the MPP business, and how does Thinking Machines and the user community get there?'' to a distinguished panel including: Danny Hillis, Thinking Machines John Peterson, American Express Dennis O'Neill, Schlumberger Larry Smarr, National Center for Supercomputer Applications David Forslund, Los Alamos National Laboratory 3:00pm - 3:30pm Break 3:30pm - 5:00pm Invited Talks: Algorithms & Architecture This session will focus on developments in MPP architecture and parallel algorithms of general interest. 
Evolution of Architecture, Lew Tucker, Thinking Machines Parallel Rendering, Chuck Hansen, Los Alamos National Laboratory Algorithms, TBD 6:00pm - 9:30pm Reception and Entertainment at the New Mexico Museum of Fine Arts. Hosted by Thinking Machines. Thursday February 17, 1994 8:00am - 5:00pm Registration 8:30am - 10:00am Workshop: Communications and Programming Models A facilitated but open forum for discussion of issues concerning communications paradigms and programming models available on the CM, including CMMD, Active Messages, PVM, CMF and C* on the node, and CMF and C* global/local programming. Thinking Machines developers and experienced users will be on hand to share their insight. 10:30am - 12:00pm Invited Talks: Scientific Applications II (See "Scientific Applications I" above) 10:00am - 10:30am Break 12:00pm - 1:30pm Lunch (provided) 1:30pm - 3:00pm Parallel Sessions: 1) Workshop: System Administration Issues A facilitated but open forum for discussion of issues concerning the operation and administration of a CM, including access control and accounting, configuration of CMOST, SDA, HiPPI, and DJM. Thinking Machines personnel as well as experienced administrators from several large CM installations will be on hand to provide their insight. 2) Invited Talks: Information Processing The Business Supercomputing Group at Thinking Machines has big plans for the introduction of MPP technology into a market traditionally dominated by mainframe technology. Thinking Machines and some of their forward looking customers will present information on new products and applications for the CM. Speakers TBD. 3:00pm - 3:30pm Break 3:30pm - 5:00pm Parallel Sessions: 1) Workshop: Performance: Metrics and Reporting A facilitated but open forum for discussion of the problems with measuring and reporting on performance. As simple gigaflop numbers become increasingly suspect, establishing techniques and guidelines for reporting meaningful performance statistics is an issue to which the MPP community need address itself. 2) Workshop: Integrating the CM A facilitated but open forum for discussion of the fine art of exploiting the capabilities of a CM within a heterogeneous computing environment including everything from PCs to supercomputers. Networking, distributed computing, visualization, mass storage, and resource allocation are all likely topics. Among the participants will be Melissa Shearer of Mobil, Eric Townsend of NASA Ames Research Center, and the systems staff of the Advanced Computing Lab to share their experiences and future plans. Evening: Birds of a Feather sessions There's just not enough time for sessions on everything of interest in this diverse community. So space will be provided for impromptu gatherings organized by conference participants on topics of mutual interest. Friday February 18, 1994 8:00am - 10:00am Registration 8:30am - 10:00am Workshop: Performance: How to get there A facilitated but open forum for discussion of the craft of squeezing the utmost in performance out of CM codes. Participants are sure to include Gordon Bell Prize finalists, Thinking Machines performance gurus, and a host of others experienced with getting the most out of CMF, C*, and CDPEAC. 10:00am - 10:15am Break 10:15am - 11:45am Work in Progress Session This session will include a number of short presentations on interesting work in progress covering the entire spectrum of CM-related activities. If you are interested in making a presentation, please bring it to our attention. 
We will choose the most interesting proposals for brief (5-10min) presentations. 11:45am - 12:15pm Closing Remarks ------------------------------------------------------------------------------ CM International User Group Meeting February 16-18, 1994 Eldorado Hotel, Santa Fe, NM REGISTRATION FORM Name (for your badge): (Last, First, MI) Company: Work Address: City: State: Zip: Country: Email Address: Telephone (work): FAX: REGISTRATION FEES ----------------- Received on or before February 1, 1994: $150 Received after February 1, 1994: $200 Please indicate participation at the receptions: Tuesday, Feb. 16 o yes o no Wednesday, Feb. 17 o yes o no LANL Employees: Cost Center ____________ Program Code ____________ RETURN THIS FORM ---------------- All registration forms must be returned along with payment. Make your check payable in U.S. dollars to: CM User Group. Send your registration form and payment to: Los Alamos National Laboratory Protocol Office, MS P366 Attn.: FIN-3 Conference Accountant Los Alamos, NM 87545 Telephone No.: 505-667-6574 FAX No.: 505-667-7530 To facilitate further mailings, please return an email copy of the registration form to cm-users@acl.lanl.gov ------------------------------------------------------------------------------ HOTEL INFORMATION ---------------- A block of rooms has been reserved at the site of the meeting, the Eldorado Hotel, 309 W. San Francisco Street, Santa Fe, NM. Please call 1-800-955-4455 or 505-988-4455 for hotel reservations. Hotel rates are: Government*: $ 72.65 (single) $92.65 (double) Regular: $110.00 (single/double) * Limited availability for those who may qualify and can provide valid identification * State Universities may qualify for Government rates To obtain this special rate, please mention that you are with the CM User Group when placing your reservation. Please make your reservations by Friday, January 14, 1994. After this cut-off date the rooms will be booked subject to space and rate availability, so make your reservations as soon as possible. For further information please call Erma Pearson at 505-665-4530 or send email to cm-users@acl.lanl.gov -- Stephen C. Pope scp@acl.lanl.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: P.A.Nixon@sheffield.ac.uk (Paddy Nixon) Subject: USA PhD requirements Date: 17 Dec 1993 08:55:09 GMT Organization: Academic Computing Services, Sheffield University Nntp-Posting-Host: sunc.shef.ac.uk X-Newsreader: TIN [version 1.2 PL2] [check on parlib---there's a bunch of schools] I have a Student who is just completing his B.SC (Hons) degree in computing here in the U.K. He wants to start a PhD in the area of parallel computing, and he wants to undertake this in the states. I should add that he has been doing a final year project with Sun in Manchester on multi thread prgramming constructs/performance in Solaris2, so I believe he a suitable candidate. My questions are: 1. What are the general requirements for entry onto a graduate course. i.e. is a B.Sc. (hons) sufficient 2. How is the best way of finding the graduate schools and approaching them? 3. What are the funding possibilities? I would be very grateful for answers to the above and any other information that is deemed relevant. 
Thanks in advance Paddy Nixon Manchester Metropolitan University (paddy@sun.com.mmu.ac.uk) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Sender: news@unixfe.rl.ac.uk From: andrew@inf.rl.ac.uk (Andrew McDermott) Subject: WTC '94 CALL FOR PAPERS Date: 17 Dec 1993 12:07:26 GMT Organization: RAL, Chilton, Didcot, England Reply-To: sch@inf.rl.ac.uk Nntp-Posting-Host: cork.inf.rl.ac.uk WTC '94 CALL FOR PAPERS AND TUTORIAL PROPOSALS Villa Erba, Cernobbio, Lake Como, Italy 5 - 7 September, 1994 The Transputer Consortium (TTC) is pleased to announce that the WORLD TRANSPUTER CONGRESS 1994 (WTC '94) will be held on 5 - 7 September 1994 at the Villa Erba, Cernobbio, Lake Como, Italy . WTC '94 is the leading international transputer conference and exhibition and is the second in a series sponsored by and run under the overall management of TTC. SGS-Thomson is also sponsoring WTC '94. It is planned that each year WTC will be held in conjunction with a local partner. For the first, highly successful, WTC '93, the local partner was the German Transputer-Anwender-Treffen (TAT) conference. WTC '93, held at the Eurogress Conference Centre in Aachen, Germany, attracted 475 delegates from 32 countries worldwide. WTC '94 will be held in conjunction with the Italian Transputer User Group (ItTUG), which is due to be formed in early 1994. WTC '94 will incorporate the Inaugural Meeting of ItTUG. WTC '94 will be the first major conference where significant applications of the new T9000 transputer and its associated technologies (e.g. packet routers) will be extensively reported. OBJECTIVES * present `state-of-the-art' research on all aspects of parallel computing based upon communicating process architectures; * to demonstrate `state-of-the-art' products and applications from as wide a range of fields as possible; * to progress the establishment of international software and hardware standards for parallel computing systems; * to provide a forum for the free exchange of ideas, criticism and information from a world audience gathered from Industry, Commerce and Academia; * to promote an awareness of how transputer technologies may be applied and their advantages over other sequential and parallel processors; * to establish and encourage an understanding of the new software and hardware technologies enabled by the transputer, especially the new T9000 processor and C104 packet router from INMOS, the parallel DSP engines from Texas Instruments, and new products from Intel and other manufacturers. The conference themes will include: education and training issues, formal methods and security, performance and scalability, porting existing systems, parallelisation paradigms, tools, programming languages, support environments, standards and applications. Applications include: embedded real-time control systems, workstations, super-computing, consumer products, artificial intelligence, databases, modelling, design, data gathering and the testing of scientific or mathematical theories. BACKGROUND The World Transputer Congress (WTC) series was formed in 1992 from the merger of the TRANSPUTING series of conferences, organised by the worldwide occam and Transputer User Groups, and the TRANSPUTER APPLICATIONS series of conferences, organised by the UK SERC/DTI Transputer Initiative. 
WTC '93 attracted a large and enthusiastic audience from the majority of countries where transputer technology is accepted and/or parallel processing is seen as the key to meeting future computing demands. There is clearly a continuing, and growing, interest and commitment to this technology which will rely on the WTC series to maintain the vital information flow. It is reasonable to assume that it has already established itself as the leading conference in this important area. The successes of its predecessors has been a major factor in this. The continuing and vital support of TTC and the large number of User Groups from around the world will ensure a continuing success story for WTC. FORMAT The format adopted for WTC '93 will be continued at WTC '94. There will be a mix of Plenary Sessions, with Keynote and Invited Speakers from around the world, and Parallel Sessions, one of which will be organised by ItTUG. The exact number of Parallel Streams will be dependent on the quality of papers submitted against this Call for Papers. LOCATION WTC '94 will be held at the Villa Erba Conference and Exhibition Centre, Cernobbio, Lake Como, Italy. Cernobbio is 4KM from Como. The modern complex offers unique conference and exhibition facilities providing a main conference hall, meeting rooms and reception halls together with an exhibition area which can be divided into a maximum of 280 stands. It is set in the beautiful landscaped grounds of the Villa Erba on the shores of the lake. The Mannerist style Villa, with its steps down to the lake, was built in 1892 and is of both historic and artistic importance. ACCOMMODATION A range of hotel accommodation (2*, 3* and 4*) has been reserved for WTC '94 in Cernobbio and Como. The majority of these hotels are within easy walking distance of the Villa Erba. However there is a limit to the total number of rooms available in the town, so early booking is recommended. Details will be sent, as soon as they are available, to all people who register their interest in WTC '94 by returning the reply slip at the end of this announcement. GETTING THERE Como has excellent air, rail and road access, being within easy reach of two international airports, the main motorways and the trans-European rail networks. The two International Airports are Milan (Linate) and Lugano (Agno). Although many more international flights arrive at Milan, special arrangements are being negotiated with Crossair for flights to and from Lugano. Crossair flights connect with international flights at many major European airports. Travelling times by road to Como are 20 minutes from Milan and 15 minutes from Lugano. Buses will be provided for delegates, serving both airports. There is a frequent rail service from Milan to Como and regular buses from Como to Cernobbio. Fuller details will be sent, as soon as they are available, to people who register their interest in WTC '94. EXHIBITION An associated exhibition attracting the world's leading suppliers of transputer-based and other relevant hardware, software and application products will be held at the Villa Erba Exhibition Centre. The WTC '93 Exhibition was viewed as a great success by both exhibitors and participants alike and attracted a large number of visitors. Companies and other organisations wishing to exhibit at the WORLD TRANSPUTER CONGRESS 1994 should contact one of the Committee members listed at the end of this announcement. Opportunities will also exist for posters and demonstrations of academic achievements. 
CALL FOR PAPERS The conference programme will contain invited papers from established international authorities together with submitted papers. The International Programme Committee, presided over by Ing. A De Gloria (University of Genoa), Dr S C Hilton (TTC), Dr M R Jane (TTC), Dr D Marini (University of Milan) and Professor P H Welch (WoTUG), is now soliciting papers on all areas described above. All papers will be fully refereed in their final form. Only papers of high excellence will be accepted. The proceedings of this conference will be published internationally by IOS Press and will be issued to delegates as they register at the meeting. BEST PAPER AWARD The award for the best paper (worth approximately #500) will be based on both the submitted full paper for refereeing and the actual presentation at the Conference. Members of the Programme Committee will be the judges and their decision will be final. The winner will be announced and the presentation made in the final Closing Session on Wednesday, 7 September. PROGRAMME COMMITTEE MEMBERS The Programme Committee consists of invited experts from Industry and Academia, together with existing members from the joint organising user- groups based in Australia, France, Germany, Hungary, India, Italy, Japan, Latin America, New Zealand, North America, Scandinavia and the United Kingdom. The refereeing will be spread around the world to ensure that all points of view and expertise are properly represented and to obtain the highest standards of excellence. INSTRUCTIONS TO AUTHORS Four copies of submitted papers (not exceeding 16 pages, single-spaced, A4 or US 'letter') must reach the Committee member on the contact list below who is closest to you by 1 March 1994. Authors will be notified of acceptance by 24 May 1994. Camera-ready copy must be delivered by 23 June 1994, to ensure inclusion in the proceedings. A submitted paper should be a draft version of the final camera-ready copy. It should contain most of the information, qualitative and quantitative, that will appear in the final paper - i.e. it should not be just an extended abstract. CALL FOR TUTORIALS AND WORKSHOPS Before the World Transputer Congress 1994, we shall be holding tutorials on the fundamental principles underlying transputer technologies, the design paradigms for exploiting them, and workshops that will focus directly on a range of specialist themes (e.g. real-time issues, formal methods, AI, image processing ..). The tutorials will be held on 3 - 4 September 1994 in the Villa Erba itself. We welcome suggestions from the community of particular themes that should be chosen for these tutorials and workshops. In particular, we welcome proposals from any group that wishes to run such a tutorial or workshop. A submission should outline the aims and objectives of the tutorial, give details of the proposed programme, anticipated numbers of participants attending (minimum and maximum) and equipment (if any) needed for support. Please submit your suggestions or proposals to one of the Committee members listed below by 1 March 1994. DELIVERY AND CONTACT POINTS Dr Mike Jane, The Transputer Consortium, Informatics Department, Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, UK Phone: +44 235 445408; Fax: +44 235 445893; email: mrj@inf.rl.ac.uk Dr Daniele Marini, Department of Computer Science, University of Milan, Via Comelico,39, Milan 20135, ITALY. 
Phone: +39 2 5500 6358; Fax: +39 2 5500 6334 email: marini@imiucca.csi.unimi.it Mr David Fielding, Chair, NATUG, Cornell Information Technologies, 502 Olin Library, Cornell University, Ithaca NY 14853, USA Phone: +1 607 255 9098; Fax: +1 607 255 9346 email: fielding@library.cornell.edu Dr Kuninobu Tanno, Department of Electrical and Information Engineering, Yamagata University, Yonezawa, Yamagata 992, JAPAN Phone: +81 238 22 5181; Fax: +81 238 26 2082 email: tanno@eie.yamagata-u.ac.jp Mr John Hulskamp, Department of Computer Systems Engineering, RMIT, G.P.O. Box 2476V, Melbourne, 3001 AUSTRALIA Phone: +61 3 660 5310; Fax: +61 3 660 5340; email: jph@rmit.edu.au Dr Rafael Lins, Chair, OUG-LA, Department de Informatica, UFPE - CCEN, Cidade Universitaria, Recife - 50739 PE BRAZIL Phone: +55 81 2718430; Fax: +55 81 2710359; email: rdl@di.ufpe.br FOR FURTHER INFORMATION PLEASE CONTACT: Dr Susan C Hilton Building R1 Rutherford Appleton Laboratory CHILTON, DIDCOT, OXON. OX11 0QX UK Phone: +44 235 446154 Fax: +44 235 445893 email: sch@inf.rl.ac.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: muenkel@daedalus.tnt.uni-hannover.de (Heiko Muenkel) Subject: Searching for Papers Sender: news@newsserver.rrzn.uni-hannover.de (News Service) Organization: Universitaet Hannover, Theoretische Nachrichtentechnik Date: Fri, 17 Dec 1993 05:09:33 GMT Apparently-To: hypercube@hubcap.clemson.edu Hello, I'm searching for references to papers on "Recognition of traffic signs", written in English or German. Please reply by email to: muenkel@tnt.uni-hannover.de I will post a summary if there is any interest. Thanks in advance, Heiko -- Dipl.-Ing. Heiko Muenkel Universitaet Hannover Institut fuer Theoretische Nachrichtentechnik Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: trachos@uni-paderborn.de (Konstantin Trachos) Subject: Re: References about Topic "hot spot" ? Date: 17 Dec 1993 14:29:30 GMT Organization: Uni-GH Paderborn, Germany References: <1993Dec1.163400.17011@hubcap.clemson.edu> Nntp-Posting-Host: sunflower.uni-paderborn.de I would like to express my special thanks to both turner@csrd.uiuc.edu and zhang@ringer.cs.utsa.edu for pointing me to papers covering the hot spot problem. This is what I got: @TechReport{ZYC93, author = "Xiaodong Zhang and Yong Yan and Robert Castaneda", title = "Comparative Performance Analysis and Evaluation of Hot Spots on Network-based Shared-memory Architectures", institution = "The University of Texas at San Antonio", year = 1993, address = "zhang@ringer.cs.utsa.edu" } @TechReport{LSC92, author = "Jyh-Charn Liu and K. G. Shin and Charles C. Chang", title = "Prevention of Hot Spots in Packet-Switched Multistage Interconnection Networks", institution = "Texas A\&M University", year = 1992, number = "TAMU 92-016", address = "College Station, Texas 77843-3112", month = "July" } @Article{YeTL87, author = "Pen-Chung Yew and Nian-Feng Tzeng and Duncan H. Lawrie", title = "Distributing Hot Spot Addressing in Large Scale Multiprocessors", journal = TOC, year = "1987", volume = "C-36", number = "4", pages = "388-395", month = apr, OPTnote = "Why hot spots need not be a problem" } @InProceedings{LeKK86, author = "Gyungho Lee and Clyde P. Kruskal and David J.
Kuck", title = "The Effectiveness of Combining in Shared Memory Parallel Computers in the Presence of ``Hot Spots''", booktitle = ICPP, year = "1986", pages = "35--41", OPTnote = "" } @InProceedings{KuPf86, author = "Manoj Kumar and Gregory F. Pfister", title = "The Onset of Hot Spot Contention", booktitle = ICPP, year = "1986", pages = "28--34", OPTnote = "" } @InProceedings{PfNo85, author = "G. F. Pfister and V. A. Norton", title = "{`Hot Spot'} Contention and Combining in Multistage Interconnection Networks", pages = "790--797", booktitle = "Proceedings of the International Conference on Parallel Processing", year = 1985 } @InProceedings{RPPP85, author = "Gregory F. Pfister and W. C. Brantley and D. A. George and S. L. Harvey and W. J. Kleinfeider and K. P. McAuliffe and E. A. Melton and V. A. Norton and J. Weiss", title = "The {IBM Research Parallel Processor Prototype (RP3)}: Introduction and Architecture", booktitle = ICPP, year = "1985", pages = "764-771", month = Aug, OPTnote = "First coined term ``hot-spot''" } -- Konstantin Trachos email: trachos@dat.uni-paderborn.de -------------------------------------------------------------------------------- <> Jules Lemantre (1853 - 1914) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: wakatani@cse.ogi.edu (Akiyoshi Wakatani) Subject: Wanted [TeX style file for Parallel Computing] Message-ID: <63016@ogicse.ogi.edu> Date: 17 Dec 93 18:10:50 GMT Article-I.D.: ogicse.63016 Posted: Fri Dec 17 10:10:50 1993 Sender: news@ogicse.ogi.edu Followup-To: comp.parallel Hi. I'm looking for the TeX style file designed for Parallel Computing, which is a magazine published in Netherland. I would appreciate it if anyone give me information about that. Akiyoshi Wakatani Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Ugur Halici Subject: TAINN III : CALL FOR PAPERS CALL FOR PAPERS --------- TAINN III --------- The Third Turkish Symposium on ----------------------------------------- ARTIFICIAL INTELLIGENCE & NEURAL NETWORKS ----------------------------------------- June 22-24, 1994, METU, Ankara, Turkey Organized by Middle East Technical University & Bilkent University in cooperation with Bogazici University, TUBITAK INNS Turkey SIG, IEEE Computer Society Turkey Chapter, ACM SIGART Turkey Chapter, Conference Chair: Nese Yalabik (METU), nese@vm.cc.metu.edu.tr Program Committee Co-chairs: Cem Bozsahin (METU), bozsahin@vm.cc.metu.edu.tr Ugur Halici (METU), halici@vm.cc.metu.edu.tr Kemal Oflazer (Bilkent), ko@cs.bilkent.edu.tr Organization Committee Chair: Gokturk Ucoluk (METU) , ucoluk@vm.cc.metu.edu.tr Program Comittee: L. Akin (Bosphorus), V. Akman (Bilkent), E. Alpaydin (Bosphorus), S.I. Amari (Tokyo), I. Aybay (METU), B. Buckles (Tulane), G. CARPENTER (BOSTON), I. CICEKLI (BILKENT), C. DAGLI (MISSOURY-ROLLA), D.Davenport (Bilkent), G. Ernst (Case Western), A. Erkmen (METU) N. Findler (Arizona State), E. Gelenbe (Duke), M. Guler (METU), A. Guvenir (Bilkent), S. Kocabas (TUBITAK), R. Korf (UCLA), S. Kuru (Bosphorus), D. Levine (Texas Arlington), R. Lippmann (MIT), K. Narendra (Yale), H. Ogmen (Houston), U. Sengupta (Arizona State), R. Parikh (CUNY), F. Petry (Tulane), C. Say (Bosphorus), A. Yazici (METU), G. Ucoluk (METU), P. Werbos (NSF), N. Yalabik (METU), L. Zadeh (California), W. Zadrozny (IBM TJ Watson) Organization Committee: A. GULOKSUZ, O. IZMIRLI, E. ERSAHIN, I. OZTURK, C. 
TURHAN Scope of the Symposium * Commonsense Reasoning * Expert Systems * Knowledge Representation * Natural Language Processing * AI Programming Environments and Tools * Automated Deduction * Computer Vision * Speech Recognition * Control and Planning * Machine Learning and Knowledge Acquisition * Robotics * Social, Legal, Ethical Issues * Distributed AI * Intelligent Tutoring Systems * Search * Cognitive Models * Parallel and Distributed Processing * Genetic Algorithms * NN Applications * NN Simulation Environments * Fuzzy Logic * Novel NN Models * Theoretical Aspects of NN * Pattern Recognition * Other Related Topics on AI and NN Paper Submission: Submit five copies of full papers (in English or Turkish) limited to 10 pages by January 31, 1994 to : TAINN III, Cem Bozsahin Department of Computer Engineering Middle East Technical University, 06531, Ankara, Turkey Authors will be notified of acceptance by April 1, 1994. Accepted papers will be published in the symposium proceedings. The conference will be held on the campus of Middle East Technical University (METU) in Ankara, Turkey. A limited number of free lodging facilities will be provided on campus for student participants. If there is sufficient interest, sightseeing tours to the nearby Cappadocia region known for its mystical underground cities and fairy chimneys, to the archaeological remains at Alacahoyuk , the capital of the Hittite empire, and to local museums will be organized. For further information and announcements contact: TAINN, Ugur Halici Department of Electrical Engineering Middle East Technical University 06531, Ankara, Turkey EMAIL: TAINN@VM.CC.METU.EDU.TR (AFTER JANUARY 1994) HALICI@VM.CC.METU.EDU.TR (BEFORE) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hunt@shy.umd.edu (Brian R. Hunt) Subject: info wanted on IBM SP1 Date: 18 Dec 93 21:09:26 GMT Organization: University of Maryland, College Park Nntp-Posting-Host: shy.umd.edu Hi folks, I've been charged with gathering information ASAP on the IBM 9076 Scalable PowerParallel (SP1) system. If anyone out there is using one, I'd be interested in hearing your overall impression of it. In addition, I have a couple questions. (I will address them to the marketing people on Monday, but figure some outside opinions could be helpful as well.) Is, as I presume, the SP1 based on the POWER chip, and not the POWER2 or POWERPC? Can anyone speculate as to whether and when it will be upgradable to one of the latter chips, how much money might be involved, etc.? Is there anything special I should know about the SP1 with regard to displaying graphics? Any special features in the hardware setup, or any graphical applications for which it offers a significant speed-up? Many thanks if you can help, -- Brian R. Hunt hunt@ipst.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jan@neuroinformatik.ruhr-uni-bochum.de (Jan Vorbrueggen) Subject: Re: Status of Suprenum Project Date: 19 Dec 93 20:34:31 GMT Organization: Institut fuer Neuroinformatik, Ruhr-Universitaet Bochum, Germany References: <1993Dec15.164542.10874@hubcap.clemson.edu> They cancelled the project about two years or so ago after --- finally! --- producing the first working hardware, years late. (If I remember right, they were still using a 68020 as node controller when the 68040 was out...) The main reason was that the hardware was no longer competitive. 
The whole thing cost (mainly the German taxpayer) on the order of 150 MDM...think what you could do with that amount of money! The software is owned and being actively promoted (and, presumably, advanced) by a company called Pallas GmbH in Bonn. Jan Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: marc@access.digex.net (Marc E. Cotnoir) Subject: Codes ported for message passing??? Organization: Express Access Online Communications, Greenbelt, MD USA There has been much discussion about the implementation and comparative performance of message passing systems such as CHAMELEON, p4, PVM, PARMACS and (soon-to-be) MPI, for the development of parallel programs on multi-processor systems and clustered workstations. But how many real codes are being or have been ported to these environments? I am thinking specifically of widely used codes for applications such as finite element, molecular modelling, CFD and such. If you are currently involved in such porting work, or know of centers where these activities are on-going, I'd like to receive this information. Please forward details regarding what codes are being (or have been) ported, which message passing system is used and the sites or contact details where this work is going on (if known). Please email to the address below and if the response is high I will summarize to this newsgroup. Thanks in advance for your help. Regards Marc marc@access.digex.net Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Marie-Christine Sawley Subject: Position offered at EPFL/Switzerland Organization: Service Informatique Central Nntp-Posting-Host: samac9.epfl.ch X-Useragent: Nuntius v1.1.2 X-Xxdate: Mon, 20 Dec 93 12:38:36 GMT SCIENTIST / PROGRAMMER IN PARALLEL COMPUTING Swiss Federal Institute of Technology of Lausanne (EPFL) The EPFL has a vacancy within the framework of the Parallel Application Technology Program (PATP) established in collaboration with CRAY RESEARCH Inc. This program is aimed at the development of scientific applications for the massively parallel computer, the Cray T3D. We are presently looking for a senior scientist / programmer having a solid background in scientific simulation and an excellent knowledge of Fortran, C and UNIX, to join the Program Support Team. The position requires a strong interest in taking up the unique challenge of contributing to the expansion of parallel computing technology at the EPFL, good communication skills and the ability to work in a multidisciplinary group. Experience with parallel architectures and programming methodologies would be highly desirable. The position is available from 1 January 1994 for a period of 12 months, with possible renewals. The level of appointment will depend on the qualifications of the applicant. Applications, including a curriculum vitae and the names of references, should be sent before 31 January 1994 to Dr. Marie-Christine Sawley, SIC, MA-Ecublens, CH-1015 Lausanne, Switzerland. (For further information Tel: +41 21 693 2242; Fax: +41 21 693 2220; e-mail: sawley@sic.epfl.ch.) The EPFL is comprised of 11 departments actively involved in both research and teaching, with a total of 4200 students, 140 professors, 1400 research staff, as well as additional administrative and technical staff.
Excellent computer resources are available, consisting of central supercomputers (presently a Cray Y-MP M94 and a Cray Y-MP EL file server), as well as the computational resources (e.g., high and low level workstations) of the various departments and institutes. The EPFL will install in April 1994 a Cray T3D system with 128 processors (to be upgraded to 256). The EPFL is the sole European PATP site; its activities will be coordinated with the three corresponding American programs at Pittsburgh Supercomputing Center, JPL/Caltech, and LLNL/LANL. Dr. M.-C. SAWLEY Section assistance Service Informatique Central MA-ECUBLENS 1015-LAUSANNE (CH) email: sawley@sic.epfl.ch Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: am@cheops.ief-paris-sud.fr (Alain Merigot) Subject: Re: ?Hyper-C (Re: Data-parallel languages (non-CM C*)?) Date: 20 Dec 1993 15:09:36 GMT Organization: IEF - Universite Paris Sud, Orsay, France References: <1993Dec7.155305.4663@hubcap.clemson.edu> <1993Dec9.142826.1645@hubcap.clemson.edu> <1993Dec14.140651.21226@hubcap.clemson.edu> Nntp-Posting-Host: cheops.ief-paris-sud.fr X-Newsreader: TIN [version 1.2 PL2] thomas faerbinger (thomas@wein12.elektro.uni-wuppertal.de) wrote: : In article <1993Dec9.142826.1645@hubcap.clemson.edu> : cary@esl.com (Cary Jamison) wrote: : |> In article <1993Dec7.155305.4663@hubcap.clemson.edu>, richards@wrl.EPI.COM : |> (Fred Richards) wrote: : |> > : |> > Is any data-parallel language emerging as a standard, : |> > much as PVM seems to be as a message-passing library? : |> > : |> > Does C*, or something *very* similar, run on any of the : |> > other MPP machines (Intel, nCube, MasPar, etc.) : |> : |> Can't say that it's an emerging standard, but HyperC seems promising. It : |> is running on workstation clusters (usually built on PVM), CM, MasPar, and : |> is being ported to others such as nCube. : Is (the PVM-version of) Hyper-C ftp-able somewhere? ( where? ) : I'm fumbling around with PVM ( on a small workstation-cluster running ULTRIX ), : CMMD and C* (on a CM5) and would like to complete this choice for some kind : of comparison. Hyper-C is a *commercial* language from HyperParallel Technology. They can be contacted at hyperc-support@hyperparallel.polytechnique.fr. Presently there is only a workstation version and a new MPV version. The MPV version can be used with parallel machines that support MPV, but there is currently no optimized version for any parallel machine. I have been using Hyper-C on a workstation for 9 months now, and I am quite satisfied with the language, which, in my opinion, is much better than C*. I can't comment on the MPV version, which I have not yet tried. Hope this helps. Alain Merigot -- || Alain Merigot /------------------------------\ || || Institut d'Electronique Fondamentale | Tel : 33 (1) 69 41 65 72 | || || Universite Paris Sud | Email : am@ief-paris-sud.fr| || || 91405 Orsay Cedex | Fax : 33 (1) 60 19 25 93 | || Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: steven.parker@acadiau.ca (Steven E. Parker) Subject: Looking for anon ftp site at Syracuse University Sender: news@relay.acadiau.ca Nntp-Posting-Host: dragon.acadiau.ca Organization: Acadia University Date: Mon, 20 Dec 1993 16:47:22 GMT Apparently-To: uunet!comp-parallel I am looking for the anon ftp site for the Northeast Parallel Architectures Center at Syracuse University.
I am trying to locate some technical reports which were cited in a paper. Regards, -- Steven Parker. (steven.parker@acadiau.ca) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: pleierc@Informatik.TU-Muenchen.DE (Christoph Pleier) Subject: Distributed C Development Environment now available for LINUX and UNICOS Keywords: distributed programming, parallel programming, Distributed C Originator: pleierc@hpeick9.informatik.tu-muenchen.de Sender: news@Informatik.TU-Muenchen.DE (USENET Newssystem) Organization: Technische Universitaet Muenchen, Germany Date: Mon, 20 Dec 1993 14:30:25 +0100 Distributed C Development Environment now available for LINUX and UNICOS The Distributed C Development Environment is now available for AT 386/486 running LINUX and for Cray supercomputers running UNICOS. The environment is stored at ftp.informatik.tu-muenchen.de in the directory /local/lehrstuhl/eickel/Distributed_C. You can get it by anonymous ftp. ----- The Distributed C Development Environment was developed at Technische Universitaet Muenchen, Germany, at the chair of Prof. Dr. J. Eickel and is a collection of tools for parallel and distributed programming on single-processor-, multiprocessor- and distributed-UNIX-systems, especially on heterogeneous networks of UNIX computers. The environment's main purpose is to support and to simplify the development of distributed applications on UNIX networks. It consists of a compiler for a distributed programming language, called Distributed C, a runtime library and several useful tools. The programming model is based on explicit concurrency specification in the programming language DISTRIBUTED C, which is an extension of standard C. The language constructs were mainly taken from the language CONCURRENT C developed by N. Gehani and W. D. Roome and are based on the concepts for parallel programming implemented in the language ADA. Distributed C makes it possible to combine ordinary C programming with user-friendly programming of process management, i.e. the specification, creation, synchronization, communication and termination of concurrently executed processes. The Distributed C Development Environment supports and simplifies distributed programming in several ways: o Development time is reduced by checking Distributed C programs for errors during compilation. Because of that, errors within communication or synchronization actions can be detected and avoided more easily. o Programming is simplified by allowing the use of simple pointer types even on loosely-coupled systems. This is perhaps the most powerful feature of Distributed C. In this way, dynamic structures like chained lists or trees can be passed between processes elegantly and easily - even in heterogeneous networks. Only the anchor of a dynamic structure must be passed to another process. The runtime system automatically allocates heap space and copies the complete structure. o Development is user-friendly by supporting the generation and installation of the executable files. A special concept was developed for performing the generation and storage of binaries by local and remote compilation in heterogeneous UNIX-networks. o Programming difficulty is reduced by software-aided allocation of processes at runtime. Only the system administrator needs to have special knowledge about the target system's hardware. The user can apply tools to map the processes of a Distributed C program to the hosts of a concrete target system.
o Execution time is reduced by allocating processes to nodes of a network with a static load balancing strategy. o Programming is simplified because singleprocessor-, multiprocessor- and distributed-UNIX-systems, especially homogeneous and heterogeneous UNIX- networks can be programmed fully transparently in Distributed C. The Distributed C Development Environment consists mainly of the tools: o Distributed C compiler (dcc): compiles Distributed C to standard C. o Distributed C runtime library (dcc.a): contains routines for process creation, synchonization, ... o Distributed C administration process (dcadmin): realizes special runtime features. o Distributed C installer program (dcinstall): performes the generation and storage of binaries by local and remote compilation in heterogeneous UNIX-networks. The environment runs on the following systems: o Sun SPARCstations (SunOS), o Hewlett Packard workstations (HP/UX), o IBM workstations (AIX), o IBM ATs (SCO XENIX, SCO UNIX, LINUX), o Convex supercomputers (ConvexOS), o Cray supercomputers (Unicos), o homogeneous and heterogeneous networks of the systems as mentioned above. Moreover the implementation was designed for the use on Intel iPSC/2s. The Distributed C Development Environment source code is provided "as is" as free software and distributed in the hope that it will be useful, but without warranty of any kind. -- Christoph Pleier pleierc@informatik.tu-muenchen.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 20 Dec 93 11:30:49 CST Subject: VIRTUAL REALITY IN MEDICINE From: Karen Morgan <70530.1227@CompuServe.COM> Medicine Meets Virtual Reality II: INTERACTIVE TECHNOLOGY AND HEALTHCARE: VISIONARY APPLICATIONS FOR SIMULATION, VISUALIZATION, ROBOTICS January 27-30, 1994, San Diego Marriott Hotel & Marina Sponsored by UCSD, 23 hours Category 1 CME credit, $390 until December 31, $450 after, call 619/751-8841, fax 751-8842, or E-mail 70530,1227@compuserve.com for information THURSDAY, January 27, three workshops offered simultaneously: I. BIOLOGICAL INFORMATICS, Hans B. Sieburg, Ph.D., Chair, Participants: Sheldon Ball, Floyd Bloom, Michael Huerta, Ralph Martinez, Jack Park, Stella Veretnik, W. Ziarko II. MASSIVELY PARALLEL PROCESSING COMPUTERS FOR MEDICAL TECHNOLOGY DEVELOPMENT Makoto Nonaka, M.D., Ph.D., Chair, Participants: Adrian King, Patrick Chang, Michael Gribskov, Russ Altman, Tom Brotherton III.INTERACTIVE TECHNOLOGIES IN HEALTHCARE: THE "BIG PICTURE" Dave Warner, Chair FRIDAY, January 28 TECHNOLOGY ASSESSMENT: Who Will Pay and Why? Diane S. Millman, J.D., Paul Radensky, M.D., J.D., John E. Abele, Steven T. Charles, M.D., Mark Wiederhold, M.D., Ph.D., Faina Shtern, M.D., Melvyn Greberman, M.D., MPH DATA FUSION: More Than the Sum of the Parts. Don Stredney, Hans B. Sieburg, Ph.D., Mark Wiederhold, M.D., Ph.D. APPLICATIONS: New Visions for New Technologies. Col. Richard M. Satava, M.D., Joseph M. Rosen, M.D., Harvey Eisenberg, M.D., Michael D. Doyle, Ph.D., Walter J. Greenleaf, Ph.D., John P. Brennan, M.D., Kenneth Kaplan, Beth A. Marcus, Ph.D., Suzanne Weghorst, Christopher C. Gallen, M.D., Ph.D. SURGERY: Images of the New Paradigm. Glenn M. Preminger, M.D., John Flynn, Adrie C.M. Dumay, Ph.D., David Hon, Jonathan R. Merril, M.D., Zoltan Szabo, Ph.D., Michael Truppe, M.D., Patrick J. Kelly, M.D., Robert B. Lufkin, M.D., Leon Kaufman, Ph.D., Karun Shimoga, Ph.D., William E. Lorensen, Volker Urban, M.D., P. Mayer, N. M. Huewel, M.D. 
SATURDAY, January 29 EDUCATION AND TRAINING: The Best and Highest Use. J.K. Udupa, Ph.D., Richard A. Robb, Ph.D., Jonathan Prince, D.D.S., Helene M. Hoffman, Ph.D., Michael J. Ackerman, Ph.D. INTERFACE: Speaking the Same Language. Nathaniel I. Durlach, Dave Warner, Col. Richard M. Satava, M.D., Myron Krueger, Ph.D., Walter J. Greenleaf, Ph.D., Paul Cutt, Narender P. Reddy, Ph.D., Scott Hassan, Alan Barnum-Scrivener, John Peifer, M.A. TELEROBOTICS: Reach Out and Touch Something. Ian Hunter, Ph.D., Paul S. Schenker, Ph.D., Elmar H. Holler, Steven T. Charles, M.D., Bela L. Musits, Hugh Lusted, Ph.D., Janez Funda, Ph.D., Yulun Wang, Ph.D. SUBMITTED PAPERS: Gabriele Faulkner, Ph.D., Uwe G. Kuehnapfel, Ph.D., Matthias Wapler, R. Bowen Loftin, Ph.D., Jaren Parikh, Kurt R. Smith, D.Sc., Bruce Kall, M.S., Donald W. Kormos, Ph.D., David W. Cloyd, M.D., Penny Jennett, Ph.D., Lauren Gabelman, M.S., Joshua Lateiner, Anthony M. DiGioia III, M.D., Joseph B. Petelin, M.D., Timothy Poston, Erik Viirre, M.D., Ph.D., Mark Bolas, A. David Johnson, Ph.D., Brian D. Athey, Ph.D. SUNDAY, January 30 TELEMEDICINE: The Global Health Community. Dave Warner, Michael F. Burrow, Jay H. Sanders, M.D., Ralph Martinez, Ph.D., William J. Dallas, Ph.D., John D. Hestenes, Ph.D., Rudy Mattheus, M.Sc., Georges J.E. De Moor, M.D., Jens P. Christensen, M.SE., MBA SUMMARY DISCUSSION: Improving Quality, Continuity, and Access to Healthcare While Reducing Cost. Faina Shtern, M.D., Col. Richard M. Satava, M.D., Makoto Nonaka, M.D., Ph.D., Nathaniel I. Durlach, John D. Hestenes, Ph.D., Rudy Mattheus, M.Sc., Melvyn Greberman, M.D., MPH EXHIBITS: Advanced Visual Systems, Inc., Artma Biomedical, Inc., BioControl Systems, Inc., Computer Motion, Inc., Dimension Technologies, Inc., Engineering Animation, Inc., XTensory, Inc., High Techsplanations, Inc., Image Technology Associates, Inc., Immersion Corp., IVI Publishing, IXION, Kaiser Medical Optics, Inc., Shooting Star Technology, Silicon Graphics, Inc., SONY Medical Systems, Inc., Stealth Technologies, Inc., Pixys, Inc., Virtual Vision Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ara@zurich.ai.mit.edu (Allan Adler) Subject: Free Simulator Wanted Date: 20 Dec 93 14:30:38 Organization: M.I.T. Artificial Intelligence Lab. Nntp-Posting-Host: camelot.ai.mit.edu I would like to explore the development of parallel code using a simulator for a parallel processor. I know that things like that exist, e.g. there was a simulator for the Connection Machine running on a Symbolics Lisp Machine. Me, I would just like a simulator that runs on a UNIX system with one processor. I can't afford to purchase software, so I need for it to be free. Does anyone know where I can get that? How about a compiler which is targeted to it? Allan Adler ara@altdorf.ai.mit.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Mon, 20 Dec 1993 19:31:26 GMT From: kale@cs.uiuc.edu Subject: Re: asynchronous IO (message passing) Sender: news@cs.uiuc.edu Organization: Dept. of Computer Sci - University of Illinois References: <1993Dec15.144124.27510@hubcap.clemson.edu> > You should carefully distinguish between > > 1) asynchronous (or non-blocking) message passing, in which you can start > a message passing op (send or receive) and then later test or wait > for its completion > > and > > 2) interrupt or signal driven message passing (probably only useful for a > receive !) 
in which receipt of a particular message forces execution > of a routine asynchronously to the execution of the users code. > > MPI fully supports 1, and provides no support for 2. There is a third way: message-driven execution. In a message-driven system, such as Charm, there may be one or many small processes (or objects, if you like OO terminology) per processor. Each process defines several entry-functions. Each message is directed to a specific entry-function of a particular process. The Charm runtime system (using a possibly user-selected scheduling strategy) repeatedly picks a message, and schedules the appropriate process to execute. Message-driven execution leads to substantial performance gains for many applications. It gives one the ability to *adaptively* overlap communication and computation. It doesn't have the "ugliness" and other problems of (2) (interrupt-driven execution) that Jim mentions. Moreover, it leads to much more modular code than (1) - testing or probing for message arrivals. I would like to elaborate on these points in another posting; but thought I should at least respond briefly right away. --- L.V. Kale kale@cs.uiuc.edu p.s. More information on Charm: anon. ftp: a.cs.uiuc.edu, in pub/CHARM.
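To make the scheduling idea above concrete, the following is a minimal, self-contained C sketch of message-driven dispatch. It is not Charm's actual interface; the names (Msg, send_msg, sched_loop, the entry-function table) are invented for this illustration, and a single in-memory FIFO stands in for messages that would normally arrive over the network and be held in a per-processor pool. The point is only that user code never blocks on, or polls for, a particular message: the scheduler picks whatever message is available next and runs the entry function it is directed to.

/* msgdriven.c -- a toy, single-process illustration of message-driven
 * scheduling.  NOT the Charm API; all names here are invented.
 * Build with:  cc msgdriven.c -o msgdriven
 */
#include <stdio.h>

#define MAXQ 128

typedef struct {            /* a message names the entry function it is    */
    int entry;              /* directed to, plus a small payload           */
    int value;
} Msg;

static Msg queue[MAXQ];     /* simple FIFO standing in for the network and */
static int head = 0, tail = 0;   /* the per-processor message pool         */

static void send_msg(int entry, int value)
{
    queue[tail].entry = entry;
    queue[tail].value = value;
    tail = (tail + 1) % MAXQ;
}

/* entry functions: each message invokes exactly one of these */
static int total = 0;

static void add_entry(int value)        /* entry 0: accumulate a value     */
{
    total += value;
}

static void print_entry(int value)      /* entry 1: report the result      */
{
    printf("total = %d (tag %d)\n", total, value);
}

typedef void (*entry_fn)(int);
static entry_fn entries[] = { add_entry, print_entry };

/* the scheduler: repeatedly pick a message and run its entry function */
static void sched_loop(void)
{
    while (head != tail) {              /* until no messages remain        */
        Msg m = queue[head];
        head = (head + 1) % MAXQ;
        entries[m.entry](m.value);      /* dispatch: no blocking receive   */
    }                                   /* and no polling by user code     */
}

int main(void)
{
    int i;
    for (i = 1; i <= 10; i++)           /* "arriving" work messages        */
        send_msg(0, i);
    send_msg(1, 0);                     /* final message triggers a report */
    sched_loop();
    return 0;
}

Compiled and run as shown in the header comment, the sketch prints the accumulated total once the final message is scheduled; in a real message-driven runtime the same loop would be fed by the communication layer, which is where the adaptive overlap of communication and computation comes from.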
Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: furnari@sp61.csrd.uiuc.edu (Mario Furnari) Subject: Last Call For Papers for MP94 Message-ID: <1993Dec21.003652.15140@csrd.uiuc.edu> Summary: Workshop on Massive Parallelism: Programming and Applications Keywords: Massive Parallelism Sender: mf@arco.na.cnr.it Organization: Istituto di Cibernetica, Arco Felice, Italy $2^{nd}$ International Workshop on Massive Parallelism: Hardware, Software and Applications October 3-7 1994 Capri - Italy Organized by: Istituto di Cibernetica (Naples, Italy) in cooperation with Department of Computer Architecture (Barcelona, Spain) Department of Computer Science (Patras, Greece) Center for Supercomputing Research & Development (Urbana-Champaign, U.S.A.) The $2^{nd}$ International Workshop on Massive Parallelism: Hardware, Software, and Applications is sponsored by the "Progetto Finalizzato Calcolo Parallelo e Sistemi Informativi" which was established by the Italian "Consiglio Nazionale delle Ricerche" to advance knowledge in all areas of parallel processing and related technologies. In addition to technical sessions of submitted paper presentations, MP '94 will offer tutorials, a parallel systems fair, and commercial exhibits. Call For Papers: Authors are invited to submit manuscripts that demonstrate original unpublished research in all areas of parallel processing, including the development of experimental or commercial systems. Topics of interest include but are not limited to: Parallel Algorithms Parallel Architectures Parallel Languages Programming Environments Parallelizing Compilers Performance Modeling/Evaluation Signal & Image Processing Systems Other Application areas To submit an original research paper, send five (hard) copies of your complete manuscript (not to exceed 15 single-spaced pages of text using point size 12 type on 8 1/2 X 11 inch pages) to the Program Chair. References, figures, tables, etc. may be included in addition to the fifteen pages of text. Please include your postal address, e-mail address, telephone and fax numbers. All manuscripts will be reviewed. Manuscripts must be received by "January 15, 1994". Submissions received after the due date or exceeding the length limit may be returned and not considered. Notification of review decisions will be mailed by "April 30, 1994". Camera-ready papers are due "May 31, 1994". Proceedings will be available at the Symposium. Electronic submissions will be considered only if they are in LaTeX or MS-Word 5 for Macintosh. Tutorials: Proposals are solicited for organizing full or half-day tutorials to be held during the Symposium. Interested individuals should submit a proposal by January 15, 1994 to the Tutorials Chair. It should include a brief description of the intended audience, a lecture outline and a vita of the lecturer(s). Parallel Systems Fair: This all-day event will include presentations by researchers who have parallel machines under development, as well as by representatives of companies with products of interest to the Massively Parallel Processing community. A presentation summary should be submitted to the Parallel Systems Fair Chair by January 15, 1994. MP '94 Organization: Workshop Program Committee: Arvind (USA) E. Ayguade (Spain) R. Bisiani (Italy) A. Fukuda (Japan) W. Jalby (France) R. Halstead (U.S.A.) J. Labarta (Spain) M. Mango Furnari (Italy) A. Nicolau (USA) D. Padua (USA) R. Perrot (U.K.) C. Polychronopoulos (USA) T. Papatheodoru (Greece) B. Smith (U.S.A.) M. Valero (Spain) R. Vaccaro (Italy) E. Zapata (Spain) Organizing Committee: M. Mango Furnari (Italy) T. Papatheodoru (Greece) R. Napolitano (Italy) C. Di Napoli (Italy) E. Ayguade (Spain) D. Padua (U.S.A.) Tutorials Chair: Prof. C. Polychronopoulos CSRD, University of Illinois 1308 West Main St. Urbana Champaign IL 61801-2307 U.S.A. Ph.: (+1) (217) 244-4144 Fax: (+1) (217) 244-1351 Internet: cdp@csrd.uiuc.edu Parallel Systems Fair Chair: Prof. A. Massarotti Istituto di Cibernetica Via Toiano, 6 I-80072 - Arco Felice (Naples) Italy Phone: +39-81-853-4126 Fax: +39-81-526-7654 E-mail: massarotti@cib.na.cnr.it Symposium Chair: Mario Mango Furnari Istituto di Cibernetica Via Toiano, 6 I-80072 - Arco Felice (Naples, Italy) Phone: +39-81-853-4229 Fax: +39-81-526-7654 E-mail: furnari@cib.na.cnr.it Secretariat: A. Mazzarella, C. Di Napoli Istituto di Cibernetica Via Toiano, 6 I-80072 - Arco Felice (Naples, Italy) Phone: +39-81-853-4123 Fax: +39-81-526-7654 E-mail: secyann@cib.na.cnr.it IMPORTANT DATES: Paper Submission Deadline: January 15, 1994 Tutorial proposals due: January 15, 1994 Systems fair presentation due: January 15, 1994 Acceptance letter sent: April 30, 1994 Camera ready copies due: May 31, 1994 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jeff@is.s.u-tokyo.ac.jp (Jeff McAffer) Subject: Request for Data Reply-To: jeff@is.s.u-tokyo.ac.jp Organization: University of Tokyo / Object Technology International Sender: news@tja.is.s.u-tokyo.ac.jp (Usenet News System) Date: Tue, 21 Dec 1993 07:21:39 GMT X-Bytes: 1145 Apparently-To: comp-parallel@UUNET.UU.NET I am working on a set of tools for analysing the behaviour of concurrent (parallel or distributed) systems but am sadly lacking in data/applications to analyze. I am wondering if you would be willing to make your data available to me. I'm interested in all kinds of data. Primarily things like event traces but anything you gather and YOU think is interesting in understanding YOUR system is interesting to me. I will try and incorporate/develop analysis techniques to suit. The data can be in any format though something at least semi-standard is appreciated. In particular, Pablo or ParaGraph (though didn't they switch to the Pablo format?) are good.
If it is just your hacked trace output that's fine too as long as you can supply a description or code. Of course, I would tell you anything I find out about your system but at this point I can't promise anything in terms of turnaround time and/or quality/depth of the results. Think of it as an investment in future coding efforts since hopefully you will be contributing to the creation of a useful toolset. Thanks Jeff McAffer -- ato de, |m -- Real facts, real cheap! Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.theory,comp.parallel From: Michael Fourman Subject: Lectureship in Computer Science, University of Edinburgh Sender: UseNet News Admin Organization: Department of Computer Science, University of Edinburgh Date: Tue, 21 Dec 1993 11:02:30 GMT Apparently-To: comp-parallel@uknet.ac.uk University of Edinburgh Lectureship Applications are invited for a five-year lectureship available from January 1994. Applicants should be qualified to Ph.D. level and should be able to teach across a range of topics and levels (including first-year) within the subject. For this post, preference will be given to candidates with research interests in algorithms and complexity, but candidates with excellent records in other areas are also encouraged to apply. A number of other temporary posts may become available during the next 18 months; applications submitted now will also be considered for these posts. The successful candidate will be expected to strengthen existing research in the Department which currently has major interests in a broad range of theoretical topics through the work of the Laboratory for Foundations of Computer Science (LFCS), in parallel computing through its links with the Edinburgh Parallel Computing Centre (EPCC), and in a small but internationally renowned group working in computational complexity. The successful candidate must be prepared to contribute to teaching at all levels and will be expected to carry out fundamental research. The Department has a very high reputation for the quality of both its teaching and research, and has excellent facilities which include over 250 workstations and access to a Connection Machine and a Meiko Computing Surface in EPCC. The existing staff complement consists of 26 lecturing staff (including 5 professors) and over 20 research workers, supported by computing officers, technical and secretarial staff. Initial salary will be on the Lecturer A scale #13,601 - 18,855 with placement according to age, qualifications and experience. Further particulars may be obtained by writing to: The Personnel Office University of Edinburgh 1 Roxburgh Street Edinburgh EH8 9TB to whom applications should be sent before the closing date of 1st February 1994, or by e-mail from Cindy McGill . -- ------------------------------------------------------------------------------- Prof. Michael P. Fourman, Laboratory for Foundations of Computer Science, University of Edinburgh, Scotland, UK.
email:Michael.Fourman@lfcs.ed.ac.uk
Moreover the implementation was designed for use on Intel iPSC/2s. The Distributed C Development Environment source code is provided "as is" as free software and distributed in the hope that it will be useful, but without warranty of any kind. -- Christoph Pleier pleierc@informatik.tu-muenchen.de Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: janr@fwi.uva.nl (Jan de Ronde) Subject: Hockney's f(1/2) Date: 21 Dec 1993 15:57:41 GMT Organization: FWI, University of Amsterdam Nntp-Posting-Host: wendy.fwi.uva.nl Summary: How to derive r(inf) and n(1/2) when arithmetic and memory references do not overlap Keywords: characterization of performance Dear all, I'm currently reading various articles by Roger Hockney, all concerning performance characterization using n(1/2), etc., including the book Parallel Computers 2. I'm looking for the derivation of the forms for r(infinity) and n(1/2) for an arithmetic pipeline in which the peak performance cannot be realized due to the data transfer to and from memory. The situation of a memory bottleneck can be approximately modelled by considering a memory access pipeline described by the parameters (r(inf)m, n(1/2)m) feeding data to a local memory, from which an arithmetic pipeline operates, described by the parameters (r(inf)a, n(1/2)a). He says that when one is interested in the average performance (r(inf), n(1/2)) of the combined memory and arithmetic pipeline, and it is not possible to overlap memory transfers with arithmetic, a little algebra shows that: r(inf) = r(inf)a/(1 + f(1/2)/f) = r(peak) pipe(f/f(1/2)) and n(1/2) = (n(1/2)m + x n(1/2)a)/(1 + x) where r(peak) = r(inf)a, f(1/2) = r(inf)a/r(inf)m, x = f/f(1/2). Is there anyone who has written this out entirely? Or knows how to? I would be grateful for responses on this subject. Jan de Ronde, University of Amsterdam. Literature: Parameterization of Computer Performance: R.W. Hockney, Parallel Computing (5) 1987, North Holland; Parallel Computers 2: Hockney and Jesshope (1988) (book). Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dowd@acsu.buffalo.edu (Patrick Dowd) Subject: Advance Program - MASCOTS94 Originator: dowd@mangrove.eng.buffalo.edu Keywords: MASCOTS94 Sender: nntp@acsu.buffalo.edu Nntp-Posting-Host: mangrove.eng.buffalo.edu Reply-To: dowd@eng.buffalo.edu Organization: State University of New York at Buffalo Date: Tue, 21 Dec 1993 22:11:16 GMT Apparently-To: comp-parallel@cis.ohio-state.edu Attached below is the advance program of MASCOTS'94, which will be held January 31 - February 2, 1994 in Durham, NC. The conference will be held in the Washington Duke Inn and Golf Club. Note that hotel reservations must be made prior to January 5 to guarantee reservations at the low conference rate of $80/day. The hotel telephone numbers are +1.919.490.0999 or +1.800.443.3853. Conference registration information is included at the end of this note, or contact Margrid Krueger at mak@ee.duke.edu for additional registration information. ---------------------------------------------------------------------- International Workshop on Modeling, Analysis and Simulation of Computer and Telecommunications Systems (MASCOTS '94) Advance Program Registration and Hotel Information Cosponsored by the IEEE (Computer Society, TCCA, TCSIM) In Cooperation with the ACM (SIGSIM, SIGARCH, SIGMETRICS) IFIP WG 7.3, and SCS January 31-February 2, 1994 Washington Duke Inn and Golf Club Durham, N.C.
27706 (919)490-0999 or (800)443-3853 Monday January 31, 1994 ======================= Welcome Address from General and Program Chairs (8:00 - 8:15AM) Session 1a: Computer Systems Chair: D. Schimmel, Georgia Tech Allen Room (8:15 - 9:35AM) "Trace-Driven Simulation of Data-Alignment and Other Factors Affecting Update and Invalidate Based Coherent Memory," E.P. Markatos and C.E. Chronaki "On Predictablility of Caches for Real-Time Applications," J-C. Liu and S.M. Shahrier "Evaluation of Write-Back Caches for Multiple Block-Sizes," Y. Wu "Modeling Power Management for Hard Disks," P.M. Greenawalt Session 1b: Computer Communications Networks Chair: P. Dowd, State University of New York at Buffalo Duke Room (8:15 - 9:35AM) "Analytic Models and Characteristics of Video Traffic in High Speed Networks," A.W. Bragg and W. Chou "An Analysis of Space Priority Queueing in ATM Switches," K.L. Reid and R.B. Bunt "A Performance Study of Photonic Local Area Network Topologies," K.M. Sivalingam and P.W. Dowd "On the Interaction Between Gateway Scheduling and Routing," I. Matta and A.U. Shankar Session 2a: Computer Performance Modeling Chair: K. Trivedi, Duke University Allen Room (9:50 - 11:10AM) "Approximate Analysis of a Multi-Class Open Queueing Network with Class Blocking and Push-out," T. Atmaca, H.G. Perros, and Y. Dallery "Analytic Performance Estimation of Client-Server Systems with Multi-Threaded Clients," D.C. Petriu, S. Majumdar, J. Lin, and C. Hrischuk "A Case Study of Scientific Application I/O Behavior," B.K. Pasquale and G.C. Polyzos "The Feasibility of Using Compression to Increase Memory System Performance," J. Wang and R.W. Quong Session 2b: Interconnection/Networks Chair: R. Fujimoto, Georgia Tech Duke Room (9:50 - 11:10AM) "Design and Simulations of a Serial-Link Interconnection Network for a Massively Parallel Computer System," H. Sharif, H. Vakilzadian, and H. Jiang "Modeling Adaptive Routing in k-ary n-cube Networks," W.A. Najjar, A. Lagman, S. Sur, and P.K. Srimani "Conflict Analysis of Multistage Interconnection Networks," Y.R. Potlapalli and D.P. Agrawal "Complete Exchange on a Wormhole Routed Mesh," R. Thakur, A. Choudhary, and G. Fox Invited Talk 1 Duke Room (11:10 - 11:55AM) "Distributed Simulation of Large-Scale PCS Networks," C.D. Carothers, R.M. Fujimoto, Y-B. Lin, and P. England --------------------------------------------------------------------- Lunch (12:00 - 1:00PM) --------------------------------------------------------------------- Session Tools T-1: Simulation of Robotics and Process Control Chair: Thomas Braunl, University of Stuttgart, Germany Duke Room (1:05 - 2:25PM) "From Simulation to Virtual Reality: A Robotic Application," M. Gerke, R. Dicken, and H. Hoyer "MCEMS Toolbox _ A Hardware-in-the-Loop Simulation Environment for Mechatronic Systems," H.-J. Herpel, M. Held, and M. Glesner "The 3d7-Simulation Environment: A Tool for Autonomous Mobile Robot Development," R. Trieb and E. von Puttkamer Session Tools T-2: Architecture and Network Simulation Chair: Manu Thapar, Hewlett Packard Research Labs Duke Room (2:35 - 3:55PM) "Hierarchical Architecture Design and Simulation Environment," F.W. Howell, R. Williams, and R.N. Ibbett "Animated Simulations of Media Access Protocols in Local Area Networks," T. Uhl and J. Ulmer "BONeS DESIGNER: A Graphical Environment for Discrete-Event Modeling and Simulation," S.J. Schaffer and W.W. LaRue Session Tools T-3: Performance Analysis and Debugging Chairs: Manu Thapar and Thomas Braunl, H.P. Research Labs and U. 
Stuttgart, Germany Duke Room (4:05 - 5:35PM) "A Toolkit for Advanced Performance Analysis," A. Waheed, B. Kronmuller, R. Sinha, and D.T. Rover "An Interactive Tool for Design, Simulation, Verification, and Synthesis of Protocols," D.Y. Chao and D.T. Wang "On-the-Fly Visualization and Debugging of Parallel Programs," M.C. Hao, A.H. Karp, M. Mackey, V. Singh, and J. Chien Session 3: Performance Modeling and Simulation Chair: C. Andre, Universite de Nice-Sophia Antipolis, France Allen Room (4:05 - 5:35PM) "A Matrix Approach to Performance Data Modeling, Analysis and Visualization," A. Waheed, B. Kronmuller, and D.T. Rover "FAST: A Functional Algorithm Simulation Testbed," M.D. Dikaiakos, A. Rogers, and K. Steiglitz "Simulation of Temporal Behaviour Based on a Synchronous Language," C. Andre and M.A. Peraldi "Xmgm: Performance Modeling Using Matrix Geometric Techniques," B.R. Haverkort, A.P.A. van Moorsel, D-J. Speelman Posters Session Chair: V. Madisetti, Georgia Tech Allen Room (1:00 - 3:00PM) "Experiment and Performance Evaluation of a Distributed Collaboration System," L-T. Shen, G. Memmi, P. Petit, and P. Denimal "Trace-Driven and Program-Driven Simulation: A Comparison," B.A. Malloy "Towards the Automatic Derivation of Computer Performance Models from the Real Time and Embedded Systems Design," R. Puigjaner and J. Szymanski "A Programmable Simulator for Analyzing the Block Data Flow Architecture," S. Alexandre, W. Alexander, and D.S. Reeves "Colored Petri Net Methods for Performance Analysis of Scalable High-Speed Interconnects," L. Cherkasova, A. Davis, V. Kotov, and T. Rokicki "Detecting Latent Sector Faults in Modern SCSI Disks," H.H. Kari, H. Saikkonen, and F. Lombardi "Performance of Output-Multibuffered Multistage Interconnection Networks Under Non-Uniform Traffic Patterns," B. Zhou and M. Atiquzzaman "Visualization of Network Performance Using the AVS Visualization System," R.O. Cleaver and S.F. Midkiff "Coreference Detection in Automatic Analysis of Specifications," S. Shankaranarayanan and W. Cyre "Visual Feedback for Validation of Informal Specifications," A. Thakar and W. Cyre "Discrete Time Open Queueing Networks with Feedback, Bulk Arrivals and Services," C. Vu Duy "A Maximum Entropy Analysis of the Single Server Queue," D. Frosch-Wilke and K. Natarajan "A Methodology for Generation and Collection of Multiprocessor Traces," P.J. Bond, B.C. Kim, C.A. Lee, and D.E. Schimmel "IDtrace _ A Tracing Tool for i486 Simulation," J. Pierce and T. Mudge "Stochastic Bounds on Execution Times of Parallel Computations," F. Lo Presti, M. Colajanni, and S. Tucci Invited Talk 2 Allen Room (3:00 - 3:45PM) "TITLE: TBA," Debasis Mitra Reception (7:30PM) Duke/Allen Room Tuesday February 1, 1994 ======================== Session 4a: Multiprocessor Systems Chair: D.P. Agrawal, North Carolina State University Allen Room (8:15 - 9:35AM) "Modeling Data Migration on CC-NUMA and CC-COMA Hierarchical Ring Architectures," X. Zhang and Y. Yan "ES: A Tool for Predicting the Performance of Parallel Systems," J.B. Sinclair and W.P. Dawkins "Performance of Multiple-Bus Multiprocessor under Non-Uniform Memory Reference," M.A. Sayeed and M. Atiquzzaman "VHDL Modeling for the Performance Evaluation of Multicomputer Networks," J.T. McHenry and S.F. Midkiff Session 4b: Tools and Interfaces Chair: B. Malloy, Clemson University Duke Room (8:15 - 9:35AM) "Scalability Analysis Tools for SPMD Message-Passing Parallel Programs," S.R. Sarukkai "Automated Modeling of Message-Passing Programs," P. Mehra, M. Gower, and M.A. 
Bass "A Flexible Graphical User Interface for Performance Modeling," Y-B. Lin and D. Daly Session 5a: Efficient Simulation Mechanisms Chair: G. Kesidis, University of Waterloo, Canada Allen Room (9:50 - 11:10AM) "MINT: A Front End for Efficient Simulation of Shared-Memory Multiprocessors," J.E. Veenstra and R.J. Fowler "Analysis of Memory and Time Savings Using EC/DSIM," G. Hermannsson, A. Li, and L. Wittie "A Performance Study of the RPE Mechanism for PDES," J.E. Butler and V.E. Wallentine Session 5b: Network Simulation and Design Chair: K. Bagchi, Stanford University Duke Room (9:50 - 11:10AM) "Object-Oriented Modeling, Simulation and Implementation of a Network Management System," M. Beckers, J. Peeters, and F. Verboven "Synchronous Digital Hierarchy Network Modeling," H.L. Owen "The Knitting Technique and Its Application to Communication Protocol Synthesis," D.Y. Chao and D.T. Wang "Genetic Algorithm and Neural Network Approaches to Local Access Network Design," T. Routen Invited Talk 3 Duke Room (11:10 - 11:55PM) "Time Constrained Message Transmission in a LAN Environment," S.K. Tripathi and S. Mukherjee --------------------------------------------------------------------- Lunch (12:00 - 1:00PM) --------------------------------------------------------------------- Invited Talk 4 Duke Room (1:10 - 2:00PM) "Evaluating Memory System Performance of a Large-Scale NUMA Multiprocessor," K. Harzallah and K.C. Sevcik Panel Session 1 Duke Room (2:00 - 3:00PM) "Open Problems in Modeling, Simulation and Modeling of Communication Networks" Panel Session 2 Allen Room (2:00 - 3:00PM) "Open Problems in Modeling Simulation and Analysis of Computer Systems" Invited Talk 5 Duke Room (3:00 - 3:45PM) "Parallel Simulation of Markovian Queueing Networks," P. Heidelberger and D.M. Nicol Invited Talk 6 Duke Room (3:50 - 4:35PM) "Modeling, Performance Evaluation, and Ordinal Optimization of Integrated Voice/Data Networks," J.E. Wieselthier, C.M. Barnhart, and A. Ephremides Invited Talk 7 Duke Room (4:45 - 5:30PM) "TITLE: TBA," John Tsitsiklis Dinner (8:00PM) Center Room Invited Talk 8 (Over Dinner) "G-Networks with Multiple Class Negative and Positive Customers," J.M. Fourneau, E. Gelenbe, and R. Suros Wednesday February 2, 1994 ========================== Session 6a: Analytical Models and Solutions Chair: J. Walrand, University of California at Berkeley Allen Room (8:15 - 9:35AM) "Dynamic Load Balancing in Distributed Systems," E. Gelenbe and R. Kushwaha "Adaptive Bidding Load Balancing Algorithms in Heterogeneous Distributed Systems," Y. Zhang, H. Kameda, and K. Shimizu "On Solving Stochastic Coupling Matrices Arising in Iterative Aggregation/Disaggregation Methods," W.J. Stewart and A. Touzene "An Analytical Model for the Binary Feedback Scheme," W. Liu, B. Stephens, and E.K. Park Session 6b: Protocol Modeling and Simulation Chair: A. Bianco, Politecnico di Torino, Italy Duke Room (8:15 - 9:35AM) "Modeling and Evaluating the DQDB Protocol with Stochastic Timed Petri Nets," R.L.R. Carmo and G. Juanole "Efficient Technique for Performance Analysis of Locking Protocols," S.L. Hung, K.W. Lam, and K.Y. Lam "Visualization and Performance Analysis of Formally Specified Communication Protocols," M. Walch, A. Wolisz, and J. Wolf-Gunther "Integrating Performance Analysis in the Context of LOTOS-Based Design," M. Ajmone Marsan, A. Bianco, L. Ciminiera, R. Sisto, and A. Valenzano Invited Talk 9 "Markov Reward Approach to Performability and Reliability Analysis," K.S. Trivedi, M. Malhotra, and R.M. 
Fricks Duke Room (9:50 - 10:35AM) Session 7a: Performance Analysis Chair: G. Juanole, LAAS-CNRS, France Allen Room (10:40 - 12:00AM) "Performance Comparison Between TCP Slow-Start and a New Adaptive Rate-Based Congestion Avoidance Scheme," L. Huynh, R-F. Chang, and W. Chou "Near-Critical Path Analysis of Program Activity Graphs," C. Alexander, D. Reese, and J. Harden "Modeling to Obtain the Effective Bandwidth of a Traffic Source in an ATM Network," G. Kesidis "A Comparison of Different Wormhole Routing Schemes," Y-W. Lu, K. Bagchi, J.B. Burr, and A.M. Peterson Session 7b: Petri-Nets and Applications Chair: E. Gelenbe, Duke University Duke Room (10:40 - 12:00AM) "An Object Oriented Approach in Building an Environment for Simulation and Analysis Based on Timed Petri Nets with Multiple Execution Policies," G. Manduchi and M. Moro "Methodology for LAN Modeling and Analysis Using Petri Nets Based Models," N. Berge, M. Samaan, G. Juanole, and Y. Atamna "Simulation of Marked Graphs on SIMD Architectures Using Efficient Memory Management," H. Sellami, J.D. Allen, D.E. Schimmel, and S. Yalamanchili ====================================================================== GENERAL CHAIRS Erol Gelenbe Jean C. Walrand Electrical Engineering Electrical Engineering & CS Duke University University of California Durham, NC 27708-0291 Berkeley, CA 94720 USA PROGRAM CHAIR Vijay K. Madisetti School of Electrical Engineering Georgia Institute of Technology 777 Atlantic Drive Atlanta, GA 30332-0250 TOOLS FAIR CHAIRS Thomas Braunl Manu Thapar IPVR HP Research Labs. U Stuttgart, Breitwiesenstr. 20-22 1501 Page Mill Road D-7000 Stuttgart 80, Germany Palo Alto CA 94304, USA PROGRAM COMMITTEE MEMBERS Dharma Agrawal* (NCSU, USA) Charlie Knadler (IBM Rockville, USA) Kallol Bagchi* (AUC, Denmark) Anup Kumar (U Louisville, USA) Nader Bagherzadeh (UCI, USA) Benny Lautrup (Bohr Institute, Denmark) M. Bettaz (U Constantine, Algeria) Darrell Long (UCSC, USA) Thomas Braunl (U Stuttgart, Germany) Vijay Madisetti (Georgia Tech, USA) Jim Burr (Stanford U, USA) Guenter Mamier (U Stuttgart, Germany) Tom Casavant (U Iowa, USA) M Ajmone Marsan (Poly Torino, Italy) Giovanni Chiola* (U Torino, Italy) Ben Melamed (NEC Princeton, USA) Doug DeGroot* (TI, USA) Tuncer Oren (U Ottawa, Canada) Patrick Dowd* (SUNY-Buffalo, USA) Mary Lou Padgett (Auburn U, USA) Ed Deprettere (U Delft, Denmark) Gerardo Rubino (INRIA, France) Larry Dowdy (Vanderbilt U, USA) Herb Schwetman* (MCC, USA) Michel Dubois (USC, USA) Alan J. Smith (UC Berkeley, USA) Serge Fdida (U Rene Descartes, France) L. Spaanenburg (U Stuttgart, Germany) Paul Fishwick (U Florida, USA) Shreekant Thakkar (Sequent, USA) Jean Fourneau (Lab-MASI, France) Manu Thapar (HP Palo Alto, USA) Rhys Francis (CSIRO, Australia) Kishor Trivedi* (Duke U, USA) Geoffrey Fox (Syracuse U, USA) Hamid Vakilzadian (U Nebraska, USA) Erol Gelenbe (Duke U, USA) Jean Walrand (UC Berkeley, USA) Mary Girard (MITRE Corp., USA) Peter Wilke (U Erlangen, Germany) Dave Harper (U Texas, USA) Steve Winter (Poly. C. London, UK) Mark Holliday (Duke U,USA) Felix Wu (UC Berkeley, USA) Bob Jump (Rice U, USA) Bernie Zeigler (U Arizona, USA) Charlie Jung (IBM Kingston, USA) George Zobrist (U Missouri-Rolla, USA) George Kesidis (U Waterloo, Canada) Gianfranco Ciardo (College William and Mary, USA) (* Steering Committee Member) ====================================================================== Registration Form MASCOTS'94 January 31-February 2, 1994 Washington Duke Inn and Golf Club Durham, N.C. 
27706 (919) 490-0999, (800) 443-3853 The registration fee of $300 includes the Proceedings published by the IEEE Computer Society Press, admission to tutorials, the conference banquet, and coffee breaks. Student registration fee is $150. Additional copies of the Proceedings may be purchased at the conference. Pre-registered participants benefit from the special hotel rate of $80 per night at the elegant Washington Duke Inn and Golf Club. Reservations must be received by January 5 to obtain this discounted rate. Mention MASCOTS'94 when you call the hotel at +1.919.490.0999 or +1.800.443.3853. Please Return To: Ms. Margrid Krueger Department of Electrical Engineering Box 90291 Duke University Durham, N.C. 27708-0291 (or email at mak@ee.duke.edu) Name: ................................................................* Affiliation: .........................................................* Address: .............................................................* ......................................................................* ......................................................................* ......................................................................* ......................................................................* Email Address: .......................................................* Telephone: ...........................................................* Fax: .................................................................* The registration fee of $300 (by check drawn on a U.S. bank and made out to Duke University), or $150 for students, should be enclosed. ====================================================================== Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: N. Branan Subject: APR News Release Organization: Intel Supercomputer Systems Division NEWS RELEASE Applied Parallel Research, Inc. 550 Main St., Suite I Placerville, CA 95667 Sales and Marketing (301) 718-3733 Fax: (301) 718-3734 FOR IMMEDIATE RELEASE Applied Parallel Research is pleased to announce the signing of a distribution agreement with Intel's Supercomputer Systems Division for the FORGE family of Fortran parallelization products. Placerville, California, USA, December 9, 1993 -- Applied Parallel Research, Inc. (APR), and Intel's Supercomputer Systems Division (SSD) have signed an agreement whereby Intel SSD will offer APR's FORGE family of distributed memory Fortran parallelization tools for the Intel Paragon and iPSC/860 supercomputers. Under this agreement APR's products FORGE Explorer, FORGE Distributed Memory Parallelizer (DMP), the FORGE HPF pre-compiler (xHPF) and the newly announced FORGE Magic/DM automatic parallelization system will be available from Intel SSD. Robert Enk, Vice President of Sales and Marketing for APR, stated, "We are extremely pleased that Intel SSD has chosen to offer APR's products in conjunction with their line of supercomputer systems. Given Intel SSD's significant customer base and market presence, this agreement greatly enhances our ability to gain wide exposure and acceptance of our products. We feel that these tools will significantly enhance a programmer's ability to convert and develop applications for the Paragon and iPSC/860 systems." FORGE Explorer is a Motif-based Fortran source code browser which provides extensive interprocedural control flow and variable usage information as well as context-sensitive query functions.
FORGE DMP and APR's newly announced FORGE Magic/DM are fully compatible interactive and batch parallelization tools. Magic/DM is the first production-quality automatic parallelization tool available for distributed memory architectures. Another industry first is APR's HPF Compilation System - xHPF. The xHPF system currently supports the subset of Fortran 90 and HPF directives recommended for early implementation by the High Performance Fortran Forum. This product has also been enhanced to provide for automatic parallelization of an application and the optional generation of a source code file which has been converted from Fortran 77 to the subset of Fortran 90 and HPF directives defined in the HPFF subset. Dr. Wilfred R. Pinfold, Director of Sales Support for Intel SSD, said, "APR's FORGE family of products will provide users of Intel's Paragon and iPSC/860 systems with the most advanced set of Fortran conversion and development tools available today. The FORGE products give users the ability to effectively distribute their application across multiple processors, and when used in conjunction with the Paragon message passing environment, allow users to take advantage of the powerful Paragon architecture." Intel's Supercomputer Systems Division has an installed base of over 400 systems, and, according to International Data Corp., leads all other vendors with a 38% share of the parallel supercomputer segment of the high performance computing market. Intel Supercomputers combine the company's advanced microprocessors with Intel's interconnect technology (Scalable Parallel Processing) and the industry-standard OSF/1 AD MK operating system to deliver affordable, scalable systems. Intel supercomputers are used throughout the world for scientific, industrial and academic applications. Intel Corporation, the world's largest chip maker, is an international manufacturer of microcomputer components, modules and systems. Applied Parallel Research provides leading edge Fortran programming tools and training to users of today's advanced parallel processing systems. iPSC is a registered trademark and Paragon is a trademark of Intel Corporation. FORGE is a registered trademark of Applied Parallel Research, Inc. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: Michael Fourman Subject: Edinburgh University Lectureship in Computer Science Organization: Department of Computer Science, University of Edinburgh Newsgroups: comp.theory,comp.parallel,misc.jobs.offered Followup-To: lmm@dcs.ed.ac.uk Apologies, in the original version of this message the email address for responses was wrong. That message has been cancelled. Here is a corrected version. University of Edinburgh Lectureship Applications are invited for a five-year lectureship available from January 1994. Applicants should be qualified to Ph.D. level and should be able to teach across a range of topics and levels (including first-year) within the subject. For this post, preference will be given to candidates with research interests in algorithms and complexity, but candidates with excellent records in other areas are also encouraged to apply. A number of other temporary posts may become available during the next 18 months; applications submitted now will also be considered for these posts.
The successful candidate will be expected to strengthen existing research in the Department, which currently has major interests in a broad range of theoretical topics through the work of the Laboratory for Foundations of Computer Science (LFCS), and in parallel computing through its links with the Edinburgh Parallel Computing Centre (EPCC), and a small but internationally renowned group working in computational complexity. The successful candidate must be prepared to contribute to teaching at all levels and will be expected to carry out fundamental research. The Department has a very high reputation for the quality of both its teaching and research, and has excellent facilities which include over 250 workstations and access to a Connection Machine and a Meiko Computing Surface in EPCC. The existing staff complement consists of 26 lecturing staff (including 5 professors) and over 20 research workers, supported by computing officers, technical and secretarial staff. Initial salary will be on the Lecturer A scale, £13,601 - 18,855, with placement according to age, qualifications and experience. Further particulars may be obtained by writing to: The Personnel Office University of Edinburgh 1 Roxburgh Street Edinburgh EH8 9TB Scotland, UK to whom applications should be sent before the closing date of 1st February 1994, or by e-mail from Cindy McGill. ------------------------------------------------------------------------------- Prof. Michael P. Fourman, Laboratory for Foundations of Computer Science, University of Edinburgh, Scotland, UK. email: Michael.Fourman@lfcs.ed.ac.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Ugur Halici Subject: AI & MATH: FINAL PROGRAMME & CALL FOR PARTICIPATION CALL FOR PARTICIPATION ------------------------------------------------------------------------- THE 3RD INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE AND MATHEMATICS ------------------------------------------------------------------------- January 2-5, 1994 Pier 66 Crowne Plaza Resort & Marina Ft. Lauderdale, Florida The International Symposium on Artificial Intelligence and Mathematics is the third of a biennial series featuring applications of mathematics in artificial intelligence as well as artificial intelligence techniques and results in mathematics. There has always been a strong relationship between the two disciplines; however, the contact between practitioners of each has been limited, partly by the lack of a forum in which the relationship could grow and flourish. This symposium represents a step towards improving contacts and promoting cross-fertilization between the two areas. The editorial board of the Annals of Mathematics and Artificial Intelligence serves as the permanent organizing committee for the series of Symposia. General Chair: Martin Golumbic, Bar-Ilan University Conference Chair: Frederick Hoffman, Florida Atlantic University Program Co-Chairs: Zvi Kedem, NYU and Erol Gelenbe, Duke University Publicity Chair: Ugur Halici, Middle East Technical University Sponsors: The symposium is sponsored by Florida Atlantic University. Support from the Air Force Office of Scientific Research is pending. Accommodations: The symposium will be held at the Pier 66 Crowne Plaza Resort & Marina located at 2301 S.E. Seventeenth Street Causeway, Fort Lauderdale, Florida 33316. Telephone: 305-525-6666 or toll free in USA and Canada: 1-800-327-3796; In Florida: 1-800-432-1956; FAX: 305-728-3551.
A block of 75 rooms is available at the Pier 66 from Sunday, January 2 to Wednesday, January 5 at the symposium rate of $95 single or double. There are only a limited number of rooms available on the evening of January 1 at the special symposium rate. Due to the holiday, there are no sleeping rooms available on December 31. All rooms must be reserved by December 2 in order to get the special symposium rate. You must contact Pier 66 Reservations by December 2 and mention the symposium to get the preferred rate. To accommodate symposium attendees coming in early for the tutorials, we have also reserved a block of rooms at the Best Western Marina Inn & Yacht Harbor, which is located directly across the street and a very short walk from the Pier 66 (2150 S.E. 17th St. Causeway). We have blocked a small number of rooms at the Best Western for the evening of December 31 at the rate of $89 single and $99 double, and 40 rooms for the evening of January 1 at the same rate. Please contact the Best Western at 305-525-3484 or toll free 1-800-327-1390. Reservations must also be made by December 2, 1993 in order to get the special rate. Transportation: Delta Airlines will serve as the official conference airline. Arrangements have been made to allow discounts on Delta's domestic fares. Call Delta's Special Meeting Network, or have your travel agent call Delta at 1-800-241-6760 and reference our account number for this conference: XR0081. Ft. Lauderdale International Airport is the closest to the conference site. Shuttle transportation is available from the airport to the Pier 66 and costs less than $10 each way. Although the Pier 66 is within walking distance of many fine restaurants and shops, we will also have special rates for symposium participants through Budget Rental Car. In the U.S., call 1-800-772-3773; in Canada, call 1-800-268-8900. Reference FAU's conference number, #VKR3 AIMC. Please note: there is a daily parking charge at the Pier 66. Information: For further information about the conference content, contact Frederick Hoffman, Florida Atlantic University, Department of Mathematics, PO Box 3091, Boca Raton, FL 33431, USA. Phone: (407) 367-3345; FAX: (407) 367-2436; E-mail: hoffman@acc.fau.edu or hoffman@fauvax.bitnet. For general information about registration, call the University Division of Continuing Education and Open University Programs at (407) 367-3090; FAX (407) 367-3987. Registration: The registration fee for the Symposium is $150 regular; $75 student, for registration by December 27, 1993. After December 27th and at the door, registration is $175 ($100 for students). Students must enclose a statement from their university with the registration form. Tutorial registrations are $50 each ($35 for students) for one tutorial or $90 for two ($65 for students). To register, complete the enclosed form and return it with your payment to: International Symposium on Artificial Intelligence and Mathematics University Continuing Education and Open University Programs Florida Atlantic University P. O. Box 3091 Boca Raton, Florida 33431-0991 Make checks payable in U.S. funds only to Florida Atlantic University. Visa or MasterCard are also accepted. You may register by telephone using your credit card by calling 407-367-3092 or fax 407-367-3987.
REGISTRATION FORM Artificial Intelligence & Mathematics Program #2300-100-03/NBNRBN402 Symposium Registration - Please check one: ____ Regular: Before December 27th $150 ____ Regular: After December 27th $175 ____ Student: Before December 27th $ 75 ____ Student: After December 27th $100 Tutorials: January 2, 1994 ____ One Tutorial $ 50 ____ Student Rate $ 35 ____ Two Tutorials $ 90 ____ Student Rate $ 65 Name:_____________________________________________________________________ Social Security Number: __________________________________________________ Affiliation:_______________________________________________________________ Address:___________________________________________________________________ ___________________________________________________________________________ Telephone:__________________________________ FAX:__________________________ E-mail:____________________________________________________________________ Method of payment: _____ Check ______VISA ______MC If MasterCard or Visa, Number:_____________________________________________ Expiration Date:___________________________ Name as it appears on card: _______________________________________________ Signature: ________________________________________________________________ Sunday, January 2, 1994 Tutorials This year, as part of the effort to bring practitioners of the two disciplines together to the benefit of both, we are instituting a tutorial series. We are very pleased with the quality of our first set of tutorials. "Mathematical Aspects of Causal Models" Judea Pearl, UCLA "Mathematics of Language: How to Measure the Complexity of Natural Languages" Alexis Manaster Ramer, Wayne State University Wlodek Zadrozny, IBM T.J. Watson Research Center Others to be announced Monday, January 3 through Wednesday, January 5, 1994 Invited Speakers: Christos Papadimitriou, University of California at San Diego Jack Schwartz, Courant Institute - NYU Vladimir Lifschitz, Stanford University Banquet Speaker: "Chicanery in Computer Chess: The Botvinnik Caper" Hans J. Berliner, Carnegie Mellon University Panel Discussions: There will be two important panel discussions of issues involved in the AI-Mathematics synergism. One will focus on appropriate problems and the other on relations between constraint programming and operations research. Special Sessions: Session 1: "Consistency, Redundancy, and Implied Equalities in Systems of Inequalities" Chair: Harvey J. Greenberg, University of Colorado at Denver "The Stand-and-hit algorithm for linear programming redundancy" A. Boneh, Wichita State University, S. Boneh, The Technion, Haifa, Israel, and R.Caron and S. Jibrin, University of Windsor "A fast algorithm for diagnosing infeasible network flow problems" C. Aggarwal, MIT Operations Research Center, J. Hao, GTE Labs, and J. Orlin, MIT "The use of the optimal partition to determine all implied equalities in a consistent system and an irreducible infeasible subsystem in an inconsistent system" H.J. Greenberg, University of Colorado at Denver Sessions on SAT (organized by Endre Boros and Alex Kogan) Session 2A: "Fault Tolerance in Massively Parallel Computers" Ansuman Bagchi, Brigitte Servatius, and W.Shi, Worcester Polytechnic Institute "A Fast Parallel SAT-Solver - Efficient Workload Balancing" Max Boehm and Ewald Speckenmeyer, University of Dusseldorf, Germany, "An Empirical Study of Branching Rules for Satisfiability" John Hooker, Carnegie Mellon University, V. 
Vinay, Centre for AI and Robotics, Bangalore, India "A GRASP for MAX-SAT" Mauricio G.C. Resende, AT&T Bell Labs, Thomas A. Feo, University of Texas at Austin Session 2B: "Some Remarks on Renaming and Satisfiability Hierarchies" Thomas Eiter, Technical University of Vienna, Austria, Pekka Kilpelainen, and Heikki Mannila, University of Helsinki. "Hierarchies of Polynomially Solvable SAT Problems" Giorgio Gallo and Danilele Pretolani, University of Pisa "Escape Mechanisms for Local Search for Satisfiability" Bart Selman and Henry Kautz, AT&T Bell Laboratories "Persistency Results in SAT and MAX-SAT" Endre Boros and Peter L. Hammer, Rutgers University Session 2C: "Experimental Results on the Crossover Point in Satisfiability Problems" James Crawford and Larry Auton, University of Oregon "Many Hard Examples for Resolution are Easy" J. Franco and R. Swaminathan, University of Cincinnati "Survey of Average Time SAT Performance" Paul Purdom, University of Indiana "Computational Experiments with an Exact SAT Solver" Alex Kogan, Endre Boros, and Peter L. Hammer, Rutgers University Contributed Papers: "Area Method and Automated Theorem Proving in Affine Geometries" Shang-Ching Chou, Xiao-Shan Gao, Jing-Zhong Zhang, The Wichita State University "An Unconstrained Optimization Algorithm for the Satisfiability Problem" Jun Gu, The University of Calgary "Polynomial-Time Stable Models" Carlo Zaniolo, Luigi Palopoli, University of California at Los Angeles "Automating Induction: Explicit vs. Inductionless" Deepak Kapur, Hantao Zhang, State University of New York at Albany "Combining Neural Networks: An Overview": Sherif Hashem, Bruce Schmeiser, Purdue University "Relative Correctness of Prolog Programs" Leon Sterling, Marc Kirschenbaum, and Ashish Jain, Case Western Reserve University "Domain-Specific Complexity Tradeoffs" Bart Selman, AT&T Bell Laboratories "Removable Arcs in Stepwise-Decomposable Decision Networks" (Nevin) Lianwen Zhang and David Poole, University of British Columbia "Intelligent Backtracking in CLP-R" Charles E. Hughes, Jennifer Burg, Sheau-Dong Lang, University of Central Florida "Wormholes in the Search Space" Jerry Waxman and Jacob Shapiro, The City University of New York "Interwoven Procedural and Declarative Programming" Ken McAloon and Carol Tretkoff, CUNY Graduate Center and Brooklyn College "Directional Resolution: The Davis-Putman Procedure Revisited" Rina Dechter, University of California at Los Angeles "A Mathematical Programming Framework For Uncertainty Logics" John Hooker and K. A. Anderson, Carnegie Mellon University "Hunting For Snakes Using The Genetic Algorithm" Walter D. Potter, R. W. Robinson, K, J. Kochut, J. A. Miller, and D. Z. Redys, The University of Georgia "Paraconsistent Circumscription" Zuoquan Lin, Shantou University "Logical Considerations on Default Semantics" Guo-Quiang Zhang and William C. Rounds, University of Michigan "Seminormal Stratified Default Theories" Pawel Cholewinski,University of Kentucky "Voronoi Diagram Approach For Structure Training of Neural Networks" N. K. Bose and A. K. Garga, The Pennsylvania State University "Learning-Theoretic Perspectives of Acceptable Numberings" Ganesh R. Baliga and Anil M. Shende, University of Delaware "Use of Presburger Formulae in Semantically Guided Theorem Proving" Heng Chu and David A. Plaisted, University of North Carolina "Function Discovery using Data Transformation" Thong H. Phan and Ian H. 
Witten, The University of Calgary "Polynomial Algorithms for Problems over D-Systems" Klaus Truemper, The University of Texas at Dallas "Ranges of Nonmonotonoc Modal Logics: Largest Logics" Grigori Schwarz, Stanford University "Bayesian Computations through Approximation Functions" Eugene Santos Jr., Air Force Institute of Technology "Learning in Relational Databases: A Rough Set Approach" Xiaohua Hu and Nick Cercone, University of Regina "Neural Nets and Graph Coloring" Kenneth J. Danhof, L. Clark, W. D. Wallis, Southern Illinois University at Carbondale "Subsumption in Constraint Query Languages Involving Disjunctions of Range Constraints" Aris Ouksel and A. Ghazal, The University of Illinois at Chicago "Reasoning on Knowledge in Symbolic Computing" Jacques Calmet, Karsten Homann, and Indra A. Tjandra, Universitat Karlsruhe "Intelligent Simulation About Dynamic Systems" Feng Zhao, The Ohio State University "On conditional rewrite systems with extra variables and deterministic logic programs" Jurgen Avenhaus and Carlos Loria-Saenz "Using strong cutting planes in constraint logic programming" Alexander Bockmayr "Program tactics and logic tactics" Fausto Giunchiglia and Paolo Traverso "Serializability of sets" Ugur Halici and Asuman Dogac "Synthesis of Induction Orderings for Existence Proofs" Dieter Hutter "A comparative study of open default theories" Michael Kaminski "Theorem proving by analogy" Erica Melis "Grobner bases for set constraints" Yosuke Sato "The Hopfield net loading problem is tractable" V. Chandru, V. Vinay "Computability in control" E. Martin, D. Luzeaux "The time complexity of backtracking" Martin Zahn Speakers are invited to submit final versions of their papers for possible publication in special volumes of the journal, "Annals of Mathematics and Artificial Intelligence," published by J.C. Baltzer Scientific Publishing Company, Wettsteinplatz 10, Basel CH-4058, Switzerland. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: schumacher@rz.uni-mannheim.de Newsgroups: comp.parallel,comp.sys.super,comp.arch Subject: Report on the KSR1 Organization: Rechenzentrum Uni-Mannheim (RUM) The report 'One Year KSR1 at the University of Mannheim - Results & Experiences' is available via anonymous ftp. Connect to ftp.uni-mannheim.de, cd info/rumdoc, get rum3593.ps or rum3593s.ps, which is 300dpi or 400dpi. -- Robert Schumacher Computing Center email: schumacher@rz.uni-mannheim.de University of Mannheim voice: ++49 621 292 5605 Postfach 10 34 62 fax: ++49 621 292 5012 L15, 16 D-68131 Mannheim Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: pleierc@Informatik.TU-Muenchen.DE (Christoph Pleier) Subject: looking for remote compilation tools Sender: news@Informatik.TU-Muenchen.DE (USENET Newssystem) Organization: Technische Universitaet Muenchen, Germany I am looking for tools to perform remote compilation in heterogeneous UNIX networks. Is there anybody who can give me a hint where to find such tools? 
Thanking you in anticipation, Christoph Pleier (pleierc@informatik.tu-muenchen.de) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: tossy@uranus.informatik.rwth-aachen.de (Frank Brockners) Subject: Wanted: Information on taskmigration Date: 23 Dec 1993 13:07:02 GMT Organization: Rechnerbetrieb Informatik - RWTH Aachen Does anyone have information (papers, tech-reports, notes) on process- / taskmigration (and its implementation) on MPP-Systems? I am working on taskmigration on a transputerbased architecture and looking for related research results. Thanks in advance Frank -- Frank Brockners Institute for Operating Systems email: ih206br@cluster.rz.rwth-aachen.de Aachen University of Technology or: brockners@lfbs.rwth-aachen.de Kopernikusstr. 16, 52062 Aachen Phone: +49 241 804 374 Germany Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: cel@theory.lcs.mit.edu (Charles E. Leiserson) Newsgroups: comp.parallel,comp.theory,comp.arch Subject: SPAA'94 call for papers -- deadline January 21, 1994 Date: 22 Dec 1993 15:24:11 GMT Organization: MIT Laboratory for Computer Science Nntp-Posting-Host: larry.lcs.mit.edu SPAA'94 CALL FOR PAPERS Sixth Annual ACM Symposium on PARALLEL ALGORITHMS AND ARCHITECTURES JUNE 27-29, 1994 Cape May, New Jersey The Sixth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA'94) will be held in Cape May, New Jersey, on June 27-29, 1994. It is sponsored by the ACM Special Interest Groups for Automata and Computability Theory (SIGACT) and Computer Architecture (SIGARCH) and organized in cooperation with the European Association for Theoretical Computer Science (EATCS). CONTRIBUTED PAPERS: Contributed papers are sought that present original, fundamental advances in parallel algorithms and architectures, whether analytical or experimental, theoretical or practical. A major goal of SPAA is to foster communication and cooperation among the diverse communities involved in parallel algorithms and architectures, including those involved in operating systems, languages, and applications. The Symposium especially encourages contributed papers that offer novel architectural mechanisms or conceptual advances in parallel architectures, algorithmic work that exploits or embodies architectural features of parallel machines, and software or applications that emphasize architectural or algorithmic ideas. VENDOR PRESENTATIONS: As in previous years, the Symposium will devote a subset of the presentations to technical material describing commercially available systems. Papers are solicited describing concepts, implementations or performance of commercially available parallel computers, routers, or software packages containing novel algorithms. Papers should not be sales literature, but rather research-quality descriptions of production or prototype systems. Papers that address the interaction between architecture and algorithms are especially encouraged. SUBMISSIONS: Authors are invited to send draft papers to: Charles E. Leiserson, SPAA'94 Program Chair MIT Laboratory for Computer Science 545 Technology Square Cambridge, MA 02139 USA The deadline for submissions is JANUARY 21, 1994. Simultaneous submission of the same research to SPAA and to another conference with proceedings is not allowed. Inquiries should be addressed to Ms. Cheryl Patton (phone: 617-253-2322; fax: 617-253-0415; e-mail: cap@mit.edu). 
FORMAT FOR SUBMISSIONS: Authors should submit 15 double-sided copies of a draft paper. The cover page should include (1) title, (2) authors and affiliation, (3) e-mail address of the contact author, and (4) a brief abstract describing the work. If the paper is to be considered as a vendor presentation, the words ``Vendor Presentation'' should appear at the top of the cover page. A technical exposition should follow on subsequent pages, and should include a comparison with previous work. The technical exposition should be directed toward a specialist, but it should include an introduction understandable to a nonspecialist that describes the problem studied and the results achieved, focusing on the important ideas and their significance. The draft paper--excluding cover page, figures, and references--should not exceed 10 printed pages in 11-point type or larger. More details may be supplied in a clearly marked appendix which may be read at the discretion of the Program Committee. Any paper deviating significantly from these guidelines--or which is not received by the January 21, 1994 deadline--risks rejection without consideration of its merits. ACCEPTANCE: Authors will be notified of acceptance or rejection by a letter mailed by March 15, 1994. A final copy of each accepted paper, prepared according to ACM guidelines, must be received by the Program Chair by April 8, 1994. It is expected that every accepted paper will be presented at the Symposium, which features no parallel sessions. CONFERENCE CHAIR: Lawrence Snyder, U. Washington. LOCAL ARRANGEMENTS CHAIR: Satish Rao and Yu-dauh Lyuu, NEC Research Institute. PROGRAM COMMITTEE: Gianfranco Bilardi (U. Padova, Italy), Tom Blank (MasPar), Guy Blelloch (Carnegie Mellon), David Culler (U. California, Berkeley), Robert Cypher (IBM, Almaden), Steve Frank (Kendall Square Research), Torben Hagerup (Max Planck Institute, Germany), Charles E. Leiserson, Chairman (MIT), Trevor N. Mudge (U. Michigan, Ann Arbor), Cynthia A. Phillips (Sandia National Laboratories), Steve Oberlin (Cray Research), C. Gregory Plaxton (U. Texas, Austin), Rob Schreiber (RIACS). -- Cheers, Charles Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: langlais@ift.ulaval.ca (Pascal Langlais (Dupuis)) Subject: Conference papers needed Reply-To: langlais@ift.ulaval.ca Organization: Universite Laval, Dept. Informatique I am searching some Conference Papers : Edwards, J. "A parallel implementation of the Painter's algorithm for transputer networks". Applications of Transputers 3. Proceedings of the Third International Conference on Applications of Transputers, p.736-741, 1991. O. Friedler, M.R. Stytz, "Dynamic detection of hidden-surfaces using a MIMD multiprocessor". Proceedings of Third Annual IEEE Symposium on Computer-Based Medical Systems, p.44-51, 1990. M. Blonk, W.F. Bronsvoort, F. Bruggeman, L. de Vos, "A parallel system for CSG hidden-surface elimination". Parallel Processing. Proceedings of the IFIP WG 10.3 Working Conference, p. 139-152, 1988. If somebody have one of these or know where I can get them, please contact me !!! 
Thank you Pascal Langlais graduate student Department of Computer Science Universite Laval langlais@ift.ulaval.ca Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: beth@osc.edu (Beth Johnston) Subject: OSC to buy CRAY T3D Date: 22 Dec 1993 14:05:44 -0500 Organization: The Ohio Supercomputer Center Summary: Ohio Supercomputer Center to buy MPP FOR RELEASE: DECEMBER 20, 1993 CONTACTS: OSC: Cheryl Johnson, 614-292-6067 Cray/Media: Steve Conway, 612-683-7133 Cray/Financial: Bill Gacki, 612-683-7372 OHIO SUPERCOMPUTER CENTER ORDERS CRAY T3D MASSIVELY PARALLEL PROCESSING SYSTEM COLUMBUS, Ohio, Dec. 20, 1993 -- The Ohio Supercomputer Center (OSC) and Cray Research, Inc. (NYSE: CYR) today announced an agreement under which OSC will acquire a 32-processor, "entry-level" version of the CRAY T3D massively parallel processing (MPP) system. The new CRAY system will fit well into OSC's existing Y-MP8/864 and Y-MP EL/332 computing environment. The agreement calls for OSC and Cray Research to use the new systems to collaborate on advanced research projects including medical imaging. Financial terms were not disclosed. Under the agreement, a 32-processor, air-cooled CRAY T3D system is scheduled to be installed at the OSC facility in Columbus in second-quarter 1994. The system will be closely coupled with a CRAY Y-MP2E parallel vector supercomputer system slated for installation at the same time, said OSC director Dr. Charles F. Bender. "This agreement will create a heterogeneous computing environment that combines the strengths of traditional parallel vector supercomputing with the new capabilities of MPP." According to Dr. Bender, the 32-processor system is the smallest version of the CRAY T3D product line, which is available in sizes up to 2048 processors. "This entry-level version will enable us to test the applicability of massively parallel processing to the important research projects of Ohio industry and higher education," he said. As an example, the primary goal of the medical imaging research project is to develop faster, more accurate methods for transferring and analyzing images gained from MRI (magnetic resonance imaging) and other digital medical imaging technologies. "We want to achieve real-time medical imaging, which could have very significant impact on diagnosis, surgery planning, and medical education," said Dr. Bender. The research collaboration calls for OSC to establish a multi-disciplinary team consisting of existing staff with expertise in systems programming, training, computational chemistry, computational fluid flow, and finite element analysis. Cray Research would provide training as well as staff to collaborate on the project, which has a three-year duration. "Over the years, OSC and CRI have had many successful joint research projects and we are pleased that OSC, which is already a Cray customer, has chosen to continue its relationship with Cray Research and our heterogeneous CRAY T3D system on this innovative research project," said Cray chairman and CEO John F. Carlson. "We will fully support the goals of this collaboration." OSC is a state-funded shared resource of high performance computing available to scientists and engineers, both academic and commercial. Since 1987, OSC has been committed to providing the latest computational tools and technologies to industry and higher education.
Cray Research creates the most powerful, highest-quality computational tools for solving the world's most challenging scientific and industrial problems. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ferraro@ccsf.caltech.edu (Robert Ferraro) Subject: HPCC Grad Student Fellowship Date: 22 Dec 1993 21:50:36 GMT Organization: CCSF Caltech, Pasadena, CA Nntp-Posting-Host: sampson.ccsf.caltech.edu Keywords: HPCC, fellowship, graduate student, JPL, GSRP Announcement of a Graduate Student Fellowship Opportunity in High Performance Computing at the Jet Propulsion Laboratory In the 1994 academic year, at least one new NASA Graduate Student Researchers Program (GSRP) fellowship award will be granted through JPL as part of the Federal High Performance Computing and Communications (HPCC) Program. These GSRP fellowships are targeted to support doctoral students whose research programs would be enhanced by access to JPL HPCC facilities and interaction with JPL scientists and staff. Only full-time graduate students who are citizens of the US are eligible for awards, which carry a $16,000 student stipend, and up to $6,000 for tuition and travel expenses. Fellowship terms normally extend for a maximum of 3 years. The deadline for applications for 1994 GSRP fellowships is Feb. 1, 1994. As part of NASA's HPCC Earth and Space Science (ESS) project, JPL is conducting research in system software, user tools, and parallel computational methods for distributed memory MIMD architectures. Areas of particular interest include parallel programming paradigms, decomposition and dynamic load balancing methods, parallel visualization and analysis of massive data sets, methods for solving partial differential equations, and debugging/performance monitoring methodologies. This work is in support of ESS Grand Challenge science applications, which include multi-disciplinary modeling of Earth and space phenomena and analysis of data from remote sensing instruments. JPL also operates two of the ESS project computing testbeds: an Intel Paragon, and a Cray T3D. Graduate students who are doing (or anticipate doing) doctoral research in topics which may be relevant to JPL HPCC research interests are urged to contact Dr. Robert Ferraro at the Jet Propulsion Laboratory for more information. (Faculty advisors with potential candidates are also invited to inquire.) Send Email to: ferraro@zion.jpl.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Marie-Christine Sawley Subject: Job advertisement in Switzerland (computational physicist) Path: samac9.epfl.ch!sawley Organization: Service Informatique Central YOUNG COMPUTATIONAL PHYSICIST Swiss Federal Institute of Technology of Lausanne (EPFL) The EPFL has a vacancy within the framework of the Parallel Application Technology Program (PATP) established in collaboration with CRAY RESEARCH Inc. This program is aimed at the development of scientific applications for the massively parallel computer, the Cray T3D. We are presently looking for a young computational physicist (less than 30 years old, PhD or diploma, preferably from plasma physics) having a solid background in multidimensional scientific simulations on supercomputers, to join the Plasma Physics Team of EPFL (Centre de Recherches en Physique des Plasmas, CRPP). The applicant will be part of a multilingual research group developing various hyperfrequency-device and plasma-physics codes.
He/she will be responsible of porting and exploiting two particle-in-cell (PIC) codes on the T3D for gyrotron simulations. The position requires a strong interest in learning to exploit the potential of the massively parallel computing technology for multidimensional kinetic plasma simulations, creativity in physics and good knowledge of numerical analysis. The position is available from 1 January 1994 for a period of 12 months, with two possible renewals. The level of appointment will depend on the qualifications of the applicant. Salary, based on Swiss government scales, is highly competitive. Applications, including a curriculum vitae, list of publications and the names of three references, should be sent before 31 January 1994 to Dr. Kurt Appert CRPP/EPFL Av. des Bains 21 CH-1007 Lausanne Switzerland For further information Tel: +41 21 693 34 53; Fax: +41 21 693 51 76; e-mail: appert@crpp.epfl.ch The Ecole Polytechnique Federale de Lausanne (EPFL), which is one of two national technical universities in Switzerland, is situated on the northern shore of Lake Geneva overlooking the Swiss Alps. The EPFL is comprised of 11 departments actively involved in both research and teaching with a total of 4000 students, 140 professors, 1400 research staff, as well as additional administrative and technical staff. Excellent computer resources are available, consisting of central supercomputers (presently a Cray Y-MP M94 and a Cray Y-MP EL file server), as well as the computational resources (e.g., high and low level workstations) of the various departments and institutes. Access is also available to supercomputers at other Swiss centres. The EPFL will install in April 1994 a Cray T3D system with 128 processors (to be upgraded to 256). The EPFL is the sole European PATP site; its activities will be coordinated with the three corresponding American programs at Pittsburgh Supercomputing Center, JPL/Caltech, and LLNL/LANL. ================================================================== Dr. Kurt Appert CRPP/EPFL Av. des Bains 21 CH-1007 Lausanne Switzerland Tel: +41 21 693 34 53; Fax: +41 21 693 51 76; e-mail: appert@crpp.epfl.ch Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Dawn McNeill Subject: Summer Course: Design and Analysis of Distributed Protocols Organization: MIT-Summer Professional Programs Dates: July 25-29, 1994 Tuition: $2,100 Overview: This course will focus on the design and analysis of distributed message passing protocols, the key communication paradigm in computing environments ranging from tightly coupled multiprocessors to local and wide area networks. Although message passing systems have highly efficient hardware implementations, programming them is an intellectually challenging and often painfully difficult task. There is a large body of theoretical work directed at helping the programmer/designer understand how to do this. This theory identifies typical problems for solution in message passing systems, and shows how to design and analyze protocols that solve them. Often, this involves using message passing systems to emulate simpler communication paradigms. The theory also identifies the inherent costs of solutions, and even shows that certain problems cannot be solved at all. This course is intended to familiarize programmers and system engineers with this theory, during one intensive week, through a combination of topic lectures and related problem sessions. 
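To give a flavour of the protocol machinery this theory deals with, consider a Lamport-style logical clock, one of the mechanisms named in the topic list that follows. The sketch below is only an illustration, not course material; the C function names local_event, send_stamp and on_receive are invented for it. Each process counts its own events, stamps outgoing messages, and on receipt jumps to the maximum of its own counter and the received stamp before ticking again.

/* Illustration only: a Lamport-style logical clock.  A process ticks on
   local events, stamps outgoing messages, and on receipt advances to
   max(own time, received stamp) + 1. */
#include <stdio.h>

struct lclock { unsigned long time; };

static unsigned long local_event(struct lclock *c)
{
    return ++c->time;            /* tick for a local step */
}

static unsigned long send_stamp(struct lclock *c)
{
    return ++c->time;            /* stamp carried by the outgoing message */
}

static unsigned long on_receive(struct lclock *c, unsigned long stamp)
{
    if (stamp > c->time)
        c->time = stamp;         /* catch up with the sender */
    return ++c->time;            /* then tick for the receive event */
}

int main(void)
{
    struct lclock p = { 0 }, q = { 0 };
    unsigned long m;
    local_event(&p);             /* p is now at 1 */
    m = send_stamp(&p);          /* p at 2; the message carries stamp 2 */
    local_event(&q);             /* q at 1, independently */
    printf("q receives at logical time %lu\n", on_receive(&q, m));  /* 3 */
    return 0;
}

Stamping messages this way yields an ordering of events that respects message causality without any shared physical clock.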
The following is an overview of the topics we will cover: Computation Models - We will develop your intuition and understanding of the basic distributed system models, including the Synchronous, Asynchronous, and Timing-Based models of computation. We will study the general Message Passing model, plus simpler paradigms such as Shared Memory and Atomic Objects. We will also explain the various assumptions regarding link and/or processor failures, and what their implications are for the solvability of various problems. Communication and Synchronization Problems - We will show how to solve or prove impossible problems such as Leader Election, Searching a Network, Broadcast-Convergecast, Finding Shortest Paths, Finding a Minimum Spanning Tree, Distributed Consensus, Byzantine Agreement, Transaction Commit, Mutual Exclusion, Dining Philosophers, Resource Allocation, Atomic Snapshot, Concurrent Timestamping, Register Implementation, Reliable Communication, Termination Detection, Deadlock Detection, and more. Network Transformations - We will study several of the most useful ways of transforming a message passing system into a high-level programming environment by using: Logical Clocks, State Machine Simulators, Shared Memory Emulators, and Synchronizers. Proof Methods - We will demonstrate some of the most powerful and practical proof techniques for reasoning about the correctness and performance of distributed protocols, including Invariant Assertions and Simulation Mappings. Expected Background The course is intended for programmers, system designers, system engineers, and other students with a good knowledge of practical computer systems. We will assume a typical college background in mathematics for computer science students. Course Format Each morning will be devoted to two lectures, separated by a coffee break. After lunch there will be a problem/discussion session where students will break up into small groups to discuss problems assigned by the instructors. After another coffee break, the afternoon will end with a third lecture. The third lecture will be shortened or omitted on Friday in favor of a general discussion of the applications of the course material to the students' work. On Monday night there will be a cocktail party/get-acquainted session. At this session, students will be invited to discuss the ways that distributed protocols are (or could be) used in their work. On Thursday night there will be a banquet at a local restaurant. Staff The course will be taught by Profs. Nancy Lynch and Nir Shavit. Prof. Lynch is the head of the Theory of Distributed Systems (TDS) group at MIT's Laboratory for Computer Science. Prof. Shavit is an Assistant Professor of Computer Science at Tel-Aviv University in Israel; until two years ago, he was a member of the TDS group. For further information or an application form, please contact our office at 617-253-2101, or e-mail us at summer-professional-programs@mit.edu. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cengelog@cambridge.dab.ge.com (Yilmaz Cengeloglu) Subject: FREE Blackboard, Dynamic Knowledge Exchange, Agents tool for CLIPS 5.1 Organization: Martin Marietta, Daytona Beach, Florida --------------------------------------------------------------------- FREE Blackboard, Dynamic Knowledge Exchange, Agents tool for CLIPS 5.1 ---------------------------------------------------------------------- DYNACLIPS (DYNAamic CLIPS Utilities) Version 1.0 is RELEASED. 
***PLEASE LET ME KNOW IF YOU ARE INTERESTED IN A ***FREE COPY OF THESE UTILITIES. I have already mailed a copy of these utilities to everyone who requested one. If I have forgotten anyone, please let me know. I have not yet received any feedback from the people who got it, so I do not know what they think about DYNACLIPS. This is the first version, and I did not have an environment in which to test it. Believe me, it was working very well. In order to make it more generic, I had to remove several functions of the original DYNACLIPS that I used for my thesis; this process might have introduced some problems. Source code is NOT available, please do not ask. I am only releasing libraries that you can link with CLIPS. You can use it as a BLACKBOARD ARCHITECTURE TOOL: it is a basic BBA containing Control, Blackboard, and Knowledge sources. I am NOT distributing CLIPS with this tool, so you need to obtain CLIPS yourself. The most important feature of this tool is that rules and commands can be exchanged dynamically. For instance, one agent in the framework can create a rule and send it to other agents; the other agents receive this rule and add it to their own knowledge automatically. It is easy to use; it is just CLIPS with some additional functions for sending facts, rules and commands among agents. It would be very useful for people doing research who need a BBA tool written in C/CLIPS. Dynamic Knowledge Exchange has several potential applications, and this tool would be good for preparing prototypes. History : --------- This tool is part of my thesis. I used the Blackboard Architecture as a base and designed a framework around it. In this framework, each intelligent agent is a CLIPS shell and runs as a separate process on the SunOS operating system. Agents use the blackboard to communicate with the other intelligent agents in the framework. Each intelligent agent can send/receive facts, rules and commands. Rules and facts are inserted/deleted dynamically while the agents are running, and knowledge can be transferred on either a temporary or a permanent basis. I integrated this framework with an Air Traffic Control simulator that I wrote in C. One intelligent agent runs for each plane in the ATC simulator, and the agents try to resolve conflicts using dynamic knowledge exchange. I used the ATC simulator to verify that knowledge exchange among agents works well; that does not mean that knowledge exchange is a good solution for resolving conflicts in the airspace. This framework is a prototype. The ATC simulator belongs to the institute where I was working while doing my thesis, so please do not ask for a copy. Yilmaz Cengeloglu P.O. Box 10412 Daytona Beach, FL 32120-0412 cengelog@cambridge.dab.ge.com ****(Please use this address)**** yil@engr.ucf.edu 73313.775@compuserve.com DISCLAIMER : ************************************************************ : I do not speak for Martin Marietta Corporation, and this tool : is not related to any work I do at Martin Marietta. : ************************************************************ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: " (Zdenek Sekera)" Subject: Position offered at Cray Research Switzerland Reply-To: zs@cray.com Organization: Cray Research (Switzerland) S.A.
SENIOR or POSTDOCTORAL COMPUTATIONAL PHYSICIST CRAY RESEARCH (Switzerland) CRAY RESEARCH (Switzerland) has a vacancy within the framework of the Parallel Application Technology Program (PATP) established in collaboration with the Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. This program is aimed at the development of scientific and technical applications for the massively parallel computer, the Cray T3D. We are presently looking for an experienced computational physicist having a solid background in multidimensional scientific simulation on supercomputers and some experience with parallel architectures and programming methodologies, to join the Plasma Physics Team of EPFL (Centre de Recherches en Physique des Plasmas, CRPP). The applicant will be part of the research group developing/porting various hyperfrequency-device and plasma-physics codes to the T3D. The position requires a strong interest in taking up the unique challenge of contributing to the expansion of parallel computing technology in plasma physics and fusion research, good communication skills and the ability to bring computer science, numerical analysis and physics together. Experience in particle simulation (for the solution of kinetic transport equations) would be highly desirable. Knowledge of Fortran is absolutely essential. The position is available from 1 January 1994 for a period of 18 months, with a possible extension of another 18 months. Applications, including a curriculum vitae, list of publications and references, should be sent before 31 January 1994 to: Cray Research (Switzerland) S.A. Route de Renens 1 CH-1030 Bussigny Switzerland attn. Z.Sekera For further information Tel: 41-21-702.25.00 or 41-21-693.22.00 Fax: 41-21-701.27.01 E-mail: zs@cray.com The Ecole Polytechnique Federale de Lausanne (EPFL), which is one of two national technical universities in Switzerland, is situated on the northern shore of Lake Geneva overlooking the Swiss Alps. The EPFL is comprised of 11 departments actively involved in both research and teaching with a total of 4000 students, 140 professors, 1400 research staff, as well as additional administrative and technical staff. Excellent computer resources are available, consisting of central supercomputers (presently a Cray Y-MP M94 and a Cray Y-MP EL file server), as well as other computational resources (e.g., high and low level workstations) of the various departments and institutes. Access is also available to supercomputers at other Swiss centres. The EPFL will install in April 1994 a Cray T3D system with 128 processors (to be upgraded to 256). The EPFL is the sole European PATP site; its activities will be coordinated with the three corresponding American programs at Pittsburgh Supercomputing Center, JPL/Caltech, and LLNL/LANL. ---Zdenek Sekera / Cray Research Switzerland Approved: parallel@hubcap.clemson.edu Newsgroups: comp.parallel From: fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) Subject: Seasons Greetings I am turning the reins over to a new group of moderators right after the first of the year. I want to take this opportunity to wish everyone peace and happiness in the coming year. I have enjoyed my tenure at the helm and I certainly will continue using comp.parallel as a mechanism to stay abreast. I will also continue to maintain parlib. In fact, I will WWW-ize parlib shortly. Again, thanks to all who have made this job easy over the years.
Steve =========================== MODERATOR ============================== Steve Stevenson fpst@hubcap.clemson.edu Administrative address steve@hubcap.clemson.edu Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell Wanted: Sterbenz, P. Floating Point Computation Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ara@zurich.ai.mit.edu (Allan Adler) Subject: Re: APR News Release Organization: M.I.T. Artificial Intelligence Lab. References: <1993Dec22.131528.6943@hubcap.clemson.edu> Why not post advertising on Compuserve where it is acceptable? Allan Adler ara@altdorf.ai.mit.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sscott@mcs.kent.edu (Stephen Scott) Subject: Re: Job advertisement in Switzerland (computational physicist) Date: 23 Dec 1993 18:57:22 GMT Organization: Kent State University References: <1993Dec23.160027.12360@hubcap.clemson.edu> In article <1993Dec23.160027.12360@hubcap.clemson.edu>, Marie-Christine Sawley writes: |> |> YOUNG COMPUTATIONAL PHYSICIST |> |> Swiss Federal Institute of Technology of Lausanne (EPFL) |> |> |> The EPFL has a vacancy within the framework of the Parallel |> Application Technology Program (PATP) established in |> collaboration with CRAY RESEARCH Inc. This program is aimed at |> the development of scientific applications for the massively |> parallel computer, the Cray T3D. |> |> We are presently looking for a young computational physicist |> (less than 30 years old, PhD or diploma, preferably from plasma ^^^^^^^^^^^^^^^^^^^^^^ I wonder if an age specification like this is legal? Perhaps since this is posted for a position in Switzerland it is legal. However, if posted for a position in the USA I believe it would be (should be) considered discriminatory. I believe the ACM National Office has policies against this type of restriction in their position announcements. They want you to say things like "less than 5 years from finish of Ph.D". I don't want to cause any problems over this. I am just curious, as I have recently noticed a few postings in various net groups with specific restrictions that I would consider somewhat discriminatory by USA standards. stephen. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: parallel@netcom.com (B. Mitchell Loebel) Subject: The PARALLEL Processing Connection - What Is It? Organization: NETCOM On-line Communication Services (408 241-9760 guest) The PARALLEL Processing Connection is an entrepreneurial association; we mean to assist our members in spawning very successful new businesses involving parallel processing. Our meetings take place on the second Monday of each month at 7:15 PM at Sun Microsystems at 901 South San Antonio Road in Palo Alto, California. Southbound travelers exit 101 at San Antonio; northbound attendees also exit at San Antonio and take the overpass to the other side of 101. There is a $10 visitor fee for non-members; members ($50 per year) are admitted free. Our phone number is (408) 732-9869 for a recorded message about upcoming meetings; recordings are available for those who can't attend - please inquire. Since the PPC was formed in late 1989, many people have sampled it, found it to be very valuable, and even understand what we're up to. Nonetheless, certain questions persist. Now, as we approach our fifth year of operation, perhaps we can and should clarify some of the issues.
For example: Q. What is PPC's raison d'etre? A. The PARALLEL Processing Connection is an entrepreneurial organization intent on facilitating the emergence of new businesses. PPC does not become an active member of any such new entities, ie: is not itself a profit center. Q. The issue of 'why' is perhaps the most perplexing. After all, a $50 annual membership fee is essentially free and how can anything be free in 1994? What's the payoff? For whom? A. That's actually the easiest question of all. Those of us who are active members hope to be a part of new companies that get spun off; the payoff is for all of us -- this is an easy win-win! Since nothing else exists to facilitate hands-on entrepreneurship, we decided to put it together ourselves. Q. How can PPC assist its members? A. PPC is a large technically credible organization. We have close to 100 paid members and a large group of less regular visitors; we mail to approximately 500 engineers and scientists (primarily in Silicon Valley). Major companies need to maintain visibility in the community and connection with it; that makes us an important conduit. PPC's strategy is to trade on that value by collaborating with important companies for the benefit of its members. Thus, as an organization, we have been able to obtain donated hardware, software, and training and we've put together a small development lab for hands-on use of members at our Sunnyvale office. Further, we've been able to negotiate discounts on seminars and hardware/software purchases by members. Most important, alliances such as we described give us an inside opportunity to JOINT VENTURE SITUATIONS. Q. As an attendee, what should I do to enhance my opportunities? A. Participate, participate, participate. Many important industry principals and capital people are in our audience looking for the 'movers'! For further information contact: -- B. Mitchell Loebel parallel@netcom.com Director - Strategic Alliances and Partnering 408 732-9869 PARALLEL Processing Connection Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: parallel@netcom.com (B. Mitchell Loebel) Subject: The PARALLEL Processing Connection - January Meeting Notice Organization: NETCOM On-line Communication Services (408 241-9760 guest) Date: Fri, 24 Dec 1993 01:48:48 GMT January 10th - Parallel Programming Tools - What Do We Have? - What Do We Need? - What's Coming? Providing programming tools for parallel machines is essential if the computing power of these machines is to be actualized for real world users. On January 10th, Dr. Doreen Cheng of NASA Ames will describe existing parallelization tools, libraries, debuggers, performance tuning tools, and network resource management tools. She plans to compare the tools in each category from the user's and designer's point of views. Finally, Doreen will point out the future research and development directions in each of the categories. A discussion of member entrepreneurial projects currently underway will begin at 7:15PM and the main meeting will start promptly at 7:45PM at Sun Microsystems at 901 San Antonio Road in Palo Alto. This is just off the southbound San Antonio exit of 101. Northbound travelers also exit at San Antonio and take the overpass to the other side of 101. Recordings are available for those who can't attend - please inquire. Please be prompt; as usual, we expect a large attendance; don't be left out or left standing. There is a $10 fee for non-members and members will be admitted free. 
-- B. Mitchell Loebel parallel@netcom.com Director - Strategic Alliances and Partnering 408 732-9869 PARALLEL Processing Connection Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: farhat@argos.enst.fr (Jocelyne Farhat) Subject: Threads' performance (Summary) Date: 24 Dec 1993 09:23:43 GMT Organization: Telecom Paris, France Following up on my article posted on threads' performance: > I am looking for performance measurements of different thread > systems (for example those of the systems Mach, Chorus and Solaris, > the Mach C Threads and SUN LWP libraries, and the POSIX standard,etc...) > to complete my bibliographical work on the subject. > > Does anyone know or have any that have been already done? > > Merci beaucoup. > Jocelyne. > P.S. Please send replies to farhat@inf.enst.fr A few days ago, I posted a request for performance measurements on various kinds of thread systems. I would like to thank all the people who responded to it. I must say that I received few answers but a lot of requests for the summary. Frankly, the requests were "much much" more numerous than the answers. In what follows I list the references that interested me most and that I used in my work. I was disappointed to find out that little work has been published on thread performance. The measurements I collected are, of course, not homogeneous, having been made on a variety of processors using different benchmarks. _______________________________________________________________________________ [POW 91] M.L. Powell et al. "SunOS Multi-thread Architecture." USENIX Winter 1991, Dallas, Texas. [BER 88] B. Bershad et al. "PRESTO: A System for Object-oriented Parallel Programming". Software Practice and Experience, Vol 18(8), pp. 713-732, August 1988. [AND 91] T.E. Anderson et al. "Scheduler Activations: Effective Kernel Support for the User-Level Management of Parallelism". Proc. 13th Symp. on O.S. Principles, ACM, pp. 95-109, 1991. [FAU 90] J.E. Faust and H.M. Levy "The Performance of an Object-Oriented Threads Package". Proceedings on Object-Oriented Programming: Systems, Languages, and Applications, Canada, October 1990. [AND 89] T.E. Anderson et al. "The Performance and Implications of Thread Management Alternatives for Shared-Memory Multiprocessors". IEEE Tr. on Computers, Vol. 38, No. 12, pp. 1631-1644, December 1989. [MUE 93] F. Mueller "A Library Implementation of POSIX Threads Under UNIX". Winter USENIX, January 25-29 1993, San Diego, CA. [BER 92] B.N. Bershad, R.P. Draves and A. Forin. "Using Microbenchmarks to Evaluate System Performance". Proceedings of the Third Workshop on Workstation Operating Systems (WWOS-3), April 1992. [DRA 91] R.P. Draves et al. "Using Continuations to Implement Thread Management and Communication in O.S.", Proc. 13th Symp. on O.S. Principles, ACM, pp. 122-136, 1991. [INO 91] S. Inohara and K. Kato and T. Masuda, "Thread Facility Based on User/Kernel Cooperation in the XERO Operating System" in Proceedings of the fifteenth IEEE International Computer Systems and Applications Conference (IEEE Computer Society), pages 398-405, September 1991. [INO 93] S. Inohara and K. Kato and T. Masuda "'Unstable Threads' Kernel Interface for Minimizing the Overhead of Thread Switching" in Proceedings of the 7th IEEE International Parallel Processing Symposium, pp. 149-155, April 1993. _______________________________________________________________________________ Merci encore une fois. Jocelyne.
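For readers who want to produce this kind of number themselves, a minimal creation/join microbenchmark in the spirit of the measurements reported in the papers above might look like the sketch below (illustrative only; timer resolution, scheduling scope and library defaults all affect the result):

/* Sketch of a thread creation/join microbenchmark using POSIX threads.
   The absolute numbers depend heavily on the library, the kernel and
   the timer resolution, so treat the output as a rough indication. */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

#define ITERS 1000

static void *noop(void *arg) { return arg; }   /* empty thread body */

int main(void)
{
    struct timeval t0, t1;
    pthread_t tid;
    double usec;
    int i;

    gettimeofday(&t0, NULL);
    for (i = 0; i < ITERS; i++) {
        if (pthread_create(&tid, NULL, noop, NULL) != 0) {
            perror("pthread_create");
            return 1;
        }
        pthread_join(tid, NULL);               /* include join in the cost */
    }
    gettimeofday(&t1, NULL);

    usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("create+join: %.1f microseconds per iteration\n", usec / ITERS);
    return 0;
}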
----------------------------------- Jocelyne Farhat Telecom Paris Departement Informatique Tel: 33-1-45-81-79-95 e-mail: farhat@inf.enst.fr ----------------------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dfk@dartmouth.edu (David Kotz) Subject: Re: Seasons Greetings Date: 23 Dec 1993 17:25:47 GMT Organization: Dartmouth College, Hanover, NH References: <1993Dec23.160058.12698@hubcap.clemson.edu> In article <1993Dec23.160058.12698@hubcap.clemson.edu> fpst@hubcap.clemson.edu (Steve Stevenson-Moderator) writes: I am turning the reins over to a new group of moderators right after the first of the year. I want to take this opportunity to wish everyone peace and happiness in the coming year. I have enjoyed my tenure at the helm and I certainly will continue using comp.parallel as a mechanism to stay abreast. I will also continue to maintain parlib. In fact, I will WWW-ize parlib shortly. Again, thanks to all who have made this job easy over the years. Steve Indeed, many thanks to Steve for his wonderful job as moderator! I think I speak for many in saying that we appreciate his efforts. dave -- ----------------- Mathematics and Computer Science Dartmouth College, 6211 Sudikoff Laboratory, Hanover NH 03755-3510 email: David.Kotz@Dartmouth.edu or dfk@cs.dartmouth.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.arch,comp.sys.m68k,comp.parallel From: jmc3@engr.engr.uark.edu (Dr. James M. Conrad) Subject: HELP: Paper references to global/local busses Keywords: 68040, write posting, bus arbitration Reply-To: jmc3@engr.engr.uark.edu Organization: University of Arkansas College of Engineering Several researchers at the University of Arkansas are designing and building a multiprocessor using the Motorola 68040 chip and Multichip Module (MCM) technology. The design uses dual-port SRAM and a global and local bus. Each SRAM bank is available locally to one processor on the local bus and globally to the other processors on the, well, global bus (I hate to appear redundant!!!). Those of you who know the 68040 know that it is not ideal for an MP application. We have our reasons for using it (which are beyond the question in this posting). One problem we had to resolve is that the 68040 requires control of the bus before it will write (or read) data. To implement the global/local bus, we created two locations a write can go: straight to the local memory, or straight to a FIFO buffer. The FIFO buffer (and associated logic) then worries about arbitration of the global bus. Reads are a bit more complicated. QUESTION: We know this technique has been used in the past (or we can swear we have seen it before), and the phrase "write posting" comes to mind. We cannot find any references to this technique in the standard textbooks/journals. So, if anyone out there can point us to specific references for this technique, please send me an email note (I may not get around to reading posts until after the holidays, so the post may already be cleaned off). If you are interested in learning about this work, our plan is available via anonymous ftp at engr.engr.uark.edu, directory/file /pub/cseg/uarkcsegtr1993-3.ps.Z (postscript, compressed). ------------------------------------------------------------------------- James M.
Conrad, Assistant Professor jmc3@jconrad.engr.uark.edu Computer Systems Engineering Department jmc3@engr.engr.uark.edu University of Arkansas, 313 Engineering Hall, Fayetteville, AR 72701-1201 Dept: (501) 575-6036 Office: (501) 575-6039 FAX: (501) 575-5339 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rlr@acsu.buffalo.edu (Raymond L. Roskwitakski) Subject: Seeking Grad sch. advice Keywords: parallel programming, dynamics, Monte Carlo, quantum chemistry [See the Parlib listings How to Get Information from Parlib: The parlib login on hubcap.clemson.edu is a mail server using the netlib software. To get the instructions, send the following mail message: shell> mail parlib@hubcap.clemson.edu Subject: send index . shell> .... ] Greetings. Presently I have a masters in chemistry. I wish to do my PhD in parallel programming. I am looking for a professor from whom I can learn parallel simulations (dynamics, Monte Carlo) of large systems (proteins, DNA, in solvent). If anyone knows of such a professor, or a school which encourages interdisciplinary work, please let me know. Thanks in advance Raymond Roskwitalski rlr@autarch.acsu.buffalo.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mlevin@husc8.harvard.edu (Michael Levin) Subject: a question on C* Organization: Harvard University, Cambridge, Massachusetts I have a couple of questions on C* (which I am using on a Sun front end to access a CM-2): (I haven't found any of this in the meager documentation I've been able to obtain) 1. firstly, how do I use the rank() function to sort a parallel structure by one of its fields (an int)? 2. what is the proper syntax of the write_to_pvar function? 3. are things like bzero() overloaded to enable them to work on parallel variables? How come the following code (designed to test just that) gives me a core dump? If they are not overloaded, then a) how does one know which ones (like +, sin(), etc.) are, and which are not? And also, what would be the right way of doing the equivalent operation? Here's the code: #include <stdio.h> /* for printf(); header names lost in the posting */ #include "cm/paris.h" #include "cm/cmtypes.h" #include <strings.h> /* assumed, for bzero() */ shape [4096]par; struct moo { char a[100]; int b; }; main() { struct moo:par x; [0]x.a[5] = '8'; printf("%c \n",[0]x.a[5]); bzero(x.a,10); printf("Still here. \n"); printf("%c \n",[0]x.a[5]); } Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: vip@kvant.munic.msk.su (Andrei I. Massalovitch) Subject: Looking for Hecht-Nielsen Corp. e-mail address Reply-To: vip@kvant.munic.msk.su Organization: NII Kvant Summary: Looking for Hecht-Nielsen Corp. e-mail address. Dear Colleagues, please, do you know if the Hecht-Nielsen Neurocomputer Corp (HNC Inc) has an email address? Yesterday I received a letter from them by regular post and would like to get in touch with someone there. As usual my fax is out of order, so e-mail is the fastest and most reliable way. I also have promising proposals for other NN-product suppliers. Thanks in advance ! Spasibo ! * * * * Andrei Massalovitch * Merry Christmas ! * Parallel Systems Division * * S&R Institute KVANT, Moscow * Happy New Year !
* E-mail : vip@kvant.munic.msk.su * * * * -- Andrei Massalovitch Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: sakumar@magnus.acs.ohio-state.edu (Sanjay Kumar) Newsgroups: comp.parallel,comp.parallel.pvm Subject: Need help: host/node programming on CM-5 using PVM Date: 27 Dec 1993 22:48:30 GMT Organization: The Ohio State University Hi Folks ! I am trying to move my workstation code for a master-slave program to the CM-5. Since PVM has now been ported to the CM-5, I had little trouble compiling or linking my code. However, the following problems exist: - It seems all slaves are getting spawned on the PM. I linked my program using the same makefile as on the workstation and didn't even use the cmmd-ld linker. When I tried to link with cmmd-ld, it gave a number of errors, namely _main multiply defined and some unresolved externals like CMM_enable. Could someone please enlighten me regarding the use of this linker for a PVM program? - Suppose some of the *.o files are common to both master and slave code. Do I need to compile them separately for the master and slave programs (one with the -DCP_CODE switch)? I would much appreciate any help. Thanks a lot in advance. -Sanjay Kumar Graduate Student, Civil Engg. The Ohio State University. E-mail: skumar@cad1.eng.ohio-state.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: thiebaut@sophia.smith.edu (Dominique Thiebaut) Subject: *Wanted* Run Time on Data Parallel Machine Organization: Smith College, Northampton, MA, USA
Each monthly issue includes listings of forthcoming & recently published technical books and forthcoming shows & conferences. Bonus: Exclusive interviews with technology pioneers. E-mail subscription requests to: listserv@ucsd.edu (Leave the "Subject" line blank.) In the body of the message, type: SUBSCRIBE HOTT-LIST (do not include first or last names) * * * P R E S S R E L E A S E * * * P R E S S R E L E A S E * * * G E N E R A L R E L E A S E HOTT -- Hot Off The Tree -- is a FREE monthly electronic newsletter featuring the latest advances in computer, communications, and electronics technologies. Each issue provides article summaries on new & emerging technologies, including VR (virtual reality), neural networks, PDAs (personal digital assistants), GUIs (graphical user interfaces), intelligent agents, ubiquitous computing, genetic & evolutionary programming, wireless networks, smart cards, video phones, set-top boxes, nanotechnology, and massively parallel processing. Summaries are provided from the following sources: Wall Street Journal, New York Times, Los Angeles Times, Washington Post, San Jose Mercury News, Boston Globe, Financial Times (London) ... Time, Newsweek, U.S. News & World Report ... Business Week, Forbes, Fortune, The Economist (London), Nikkei Weekly (Tokyo), Asian Wall Street Journal (Hong Kong) ... over 50 trade magazines, including Computerworld, InfoWorld, Datamation, Computer Retail Week, Dr. Dobb's Journal, LAN Times, Communications Week, PC World, New Media, VAR Business, Midrange Systems, Byte ... over 50 research journals, including ** ALL ** publications of the IEEE Computer and Communications Societies, plus technical journals published by AT&T, IBM, Hewlett Packard, Fujitsu, Sharp, NTT, Siemens, Philips, GEC ... over 100 Internet mailing lists & USENET discussion groups ... plus ... listings of forthcoming & recently published technical books and forthcoming trade shows & technical conferences BONUS: Exclusive interviews with technology pioneers ... the next issue features an interview with Mark Weiser, head of Xerox PARC's Computer Science Lab TO REQUEST A FREE SUBSCRIPTION, CAREFULLY FOLLOW THE INSTRUCTIONS BELOW Send subscription requests to: listserv@ucsd.edu Leave the "Subject" line blank In the body of message input: SUBSCRIBE HOTT-LIST Note: Do *not* include first or last names following "SUBSCRIBE HOTT-LIST" The HOTT mailing list is automatically maintained by a computer located at the University of California at San Diego. The system automatically responds to the sender's return path. Hence, it is necessary to send subscription requests directly to the listserv at UCSD. (I cannot make modifications to the list ... nor do I have access to the list.) If you have problems and require human intervention, contact: hott@ucsd.edu The next issue of the revived HOTT e-newsletter is scheduled for transmission in late January/early February. Please forward this announcement to friends and colleagues, and post to your favorite bulletin boards. Our objective is to disseminate the highest quality and largest circulation compunications (computer & communications) industry newsletter. I look forward to serving you as HOTT's new editor. Thank you. 
-- *********************************************************************** * David Scott Lewis * * Editor-in-Chief and Book & Video Review Editor * * IEEE Engineering Management Review * * (the world's largest circulation "high tech" management journal) * * Internet address: d.s.lewis@ieee.org Tel: +1 714 662 7037 * * USPS mailing address: POB 18438 / IRVINE CA 92713-8438 USA * *********************************************************************** Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.parallel.pvm From: vchagant@uahcs2.cs.uah.edu (Venkata Chaganti) Subject: What is the equivalent of csend()/intel to nwrite()/nCUBE Sender: vchagant@uahcs2.cs.uah.edu (Venkata Chaganti) Reply-To: vchagant@uahcs2.cs.uah.edu (Venkata Chaganti) Organization: Computer Science Dept., Univ. of Alabama-Huntsville Date: Tue, 28 Dec 93 17:41:22 GMT Apparently-To: comp-parallel@rutgers.edu Hi INTEL/nCUBE GURUS, I was given code written for an Intel machine and asked to port it to the nCUBE. I don't have the Intel manuals. Could someone please help me translate the following Intel calls to the nCUBE? msid = irecv(kxp,g,2*npr*kxp) if(...) csend(1+kxp,f,2*npr*kxp,kb-1,0) else csend(1+kxp,f,2*npr*kxp,kb+1,0) endif call msgwait(msid) In particular, I would like to know what each parameter means in csend(), irecv() and msgwait(), and what the efficient way is to use nread() and nwrite() on the nCUBE to get the same effect. Thanks kris chaganti chaganti@ebs330.eb.uah.edu or chaganti@uahcs2.cs.uah.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.sys.transputer,comp.parallel From: jap@minerva.inesc.pt (Joa~o Anto'nio Madeiras Pereira) Subject: TCP/IP under Parsytec Helios 1.2.1 (Rel. 910701) Sender: usenet@inesc.inesc.pt Nntp-Posting-Host: minerva-2.inesc.pt Organization: INESC - Inst. Eng. Sistemas e Computadores, LISBOA. PORTUGAL. Date: Tue, 28 Dec 1993 17:13:32 GMT Apparently-To: comp-parallel@inesc.inesc.pt Hi, netters, We are trying to install the Parsytec Helios Ethernet (Rel. 910111 Ver 1.2; MS-DOS systems) on a MultiCluster2-16 machine with a TPM-ETN board and running Parsytec Helios PC (Rel. 910701 Ver. 1.2.1). Unfortunately we don't have the gdi program (it is said that gdi is distributed with Rel. 910701) that generates the devinfo configuration file so that the TCP/IP server can select the Ethernet device driver. We have the driver for the TPM-ETN board and the devinfo.net file, but not the gdi program. Can a good soul help us? Please? Thanx in advance Joao Pereira Joao Antonio Madeiras Pereira + INESC - Instituto de Engenharia de Sistemas e Computadores + Rua Alves Redol, 9 -2o DTO + Apartado 10105 1017 Lisboa PORTUGAL + Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mirth@MCS.COM (Gavin S. Patton) Subject: Origami Port Date: 28 Dec 1993 17:06:51 -0600 Organization: Another MCSNet Subscriber, Chicago's First Public-Access Internet! X-Newsreader: TIN [version 1.2 PL2] [I have ported origami to x-windows as an experiment. You have to make the original think it's using Curses. Not hard. The alternative is to use the folding editor in gnu emacs. Steve ] Hi, all. I fell in love with the origami editor, and use it for the (somewhat base) purpose of editing my Foxpro application files. Works great! HOWEVER... I'm interested in porting it to Windows (gasp!) and increasing the file size it can handle.
Does anyone have any suggestions (other than not to do it in Windows) or tips? Has this already been tried and either resulted in a non-viable program -or- an assassination attempt? I have the source for version 1.6 and access to several C compilers, including Visual C++, QuickC for Windows, and GNU G++. I know, this isn't very parallel, but I don't have the resources to play with that stuff yet. I post here because I found origami while looking through an ftp site catering to parallel processing. --------------------- Gavin Approved: parallel@hubcap.clemson.edu Path: bounce-back Newsgroups: comp.parallel From: rick@cs.arizona.edu (Rick Schlichting) Subject: Kahaner Report: Fujitsu's 2nd Parallel Computing WS (PCW'93) Followup-To: comp.research.japan Date: 28 Dec 1993 20:51:26 -0700 Organization: University of Arizona CS Department, Tucson AZ [Dr. David Kahaner is a numerical analyst on sabbatical to the Office of Naval Research-Asia (ONR Asia) in Tokyo from NIST. The following is the professional opinion of David Kahaner and in no way has the blessing of the US Government or any agency of it. All information is dated and of limited life time. This disclaimer should be noted on ANY attribution.] [Copies of previous reports written by Kahaner can be obtained using anonymous FTP from host cs.arizona.edu, directory japan/kahaner.reports.] From: Dr. David K. Kahaner US Office of Naval Research Asia (From outside US): 23-17, 7-chome, Roppongi, Minato-ku, Tokyo 106 Japan (From within US): Unit 45002, APO AP 96337-0007 Tel: +81 3 3401-8924, Fax: +81 3 3403-9670 Email: kahaner@cs.titech.ac.jp Re: Fujitsu's 2nd Parallel Computing WS (PCW'93) 11/93 Kawasaki Japan 12/28/93 (MM/DD/YY) This file is named "ap1000ws.93" ABSTRACT. Fujitsu's second Parallel Computing Workshop (PCW'93), held Nov 11 1993 at Fujitsu's Parallel Computing Research Facilities, Kawasaki Japan, is summarized. PCW is an opportunity for researchers who are using Fujitsu's AP1000 distributed memory parallel computer to describe their work. Significant collaborations are reported between the Australian National University and Fujitsu. I included a brief sketch of PCW'93 in a more general earlier report ("j-hpc.93", 8 Dec 1993). Some sections of that are reproduced here, along with additional details. By now, there are several Japanese commercial parallel computers, the two most notable being Fujitsu's AP 1000 and NEC's Cenju-3. Hitachi has announced plans to use HP's PA-RISC architecture to design and build a one thousand node MPP, and a somewhat more detailed product announcement has just occurred. The AP is a 2-D torus machine. Fujitsu is well ahead of other Japanese companies in providing access to its parallel computers. The company has set up a parallel computing research facility near Tokyo, with two 64cpu AP systems and one 1024 system (the maximum AP configuration). There is also another 64cpu system at Fujitsu's Makuhari facility. Associated with these labs is an extensive animation system, including digital VTR, HD-VTR, VHS-VTR, disk recorder, distributed disk video, etc. 
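Since the AP is a 2-D torus, a cell program typically derives the four neighbours it communicates with from its (x, y) cell coordinates using wrap-around arithmetic. A generic sketch of that computation follows (plain C for illustration only; it is not the AP1000 cell-library interface, and the grid shape is an assumed example):

/* Generic 2-D torus neighbour arithmetic, illustration only.
   Cells are numbered row-major on an NX x NY grid and the edges wrap,
   so every cell has exactly four neighbours. */
#define NX 32   /* example grid shape; actual AP1000 configurations vary */
#define NY 32

static int cell_id(int x, int y)            /* row-major cell number */
{
    return y * NX + x;
}

static int east(int x, int y)  { return cell_id((x + 1) % NX, y); }
static int west(int x, int y)  { return cell_id((x + NX - 1) % NX, y); }
static int north(int x, int y) { return cell_id(x, (y + 1) % NY); }
static int south(int x, int y) { return cell_id(x, (y + NY - 1) % NY); }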
Fujitsu allows researchers worldwide to log in via the Internet and access the equipment for general science; at their recent parallel computing workshop (PCW'93) it was announced that there were over 600 registered uses of this facility (425 in Japan) including 13 groups from North America including Oregon State, Oregon Graduate Inst, U Utah, U New Hampshire, U California, U Iowa, U Illinois, New Jersey Inst of Tech, and others, as well as 11 groups from Europe including U Edinburgh, Imperial College, UMIST, Lund U, GMD, ETH, and European Molecular Biology Lab. Primary research topics are in computer science, but there is also significant work on computational physics, CFD, CAD, etc. There are also several APs in Japan outside Fujitsu, for example at Kyoto University. An AP has been at the Australian National University (ANU) for over two years, and extensive collaborative work is performed with Fujitsu. The company is also working toward an agreement for similar collaborations with Imperial College London. Scientists who are interested in access to one of the APs at Fujitsu's research facility, or have other technical questions, should send email to one of the addresses listed below. The AP is in its third or fourth generation and has evolved from a more specialized graphics engine called CAP. The current system has the capability to accept a vector processor for each cpu (Fujitsu's 50MHz Micro VP) on an internal bus (LBUS) as an option (called a Numeric Calculation Accelerator--NCA by Fujitsu) with 100MFLOP peak performance. (Some details of this are given in the paper by Kamimura--see abstract below, as well as a few LINPACK benchmarks associated with the NCA.) In addition there are definite plans, early in 1994, to upgrade the AP's basic cpu from a 25 MHz SPARC to a 50MHz Viking chip set, which will significantly improve the basic cell performance. According to researchers at PCW'93, integer performance, such as sorting, is already comparable to current generation CM-5's, although floating point is worse. Stanton from ANU reports that up to 85% of peak performance was obtained on sorting in molecular dynamics computations, and a comparable percentage of per cell peak was obtained in distributed BLAS applications. Several APs, which have a reputation of being exceptionally reliable, are likely to be sold to users at Japanese companies and university labs (with and without the new cpu) who are not concerned with its lack of commercial software. The collection of registered software (software developed by users and now distributed by Fujitsu) is still small but contains packages from ANU, University of Tokyo, Science University of Tokyo, and Fujitsu. The question that Fujitsu (among others) is studying is whether they should push forward with vector/parallel computing, massively parallel computing, or some combination of both. Fujitsu's official word on the AP is that "this machine is offered in selected markets to research centers," and also that the machine would be suitable as a network server. Fujitsu's PCW'93, which occurred in Nov 1993 at the company's Kawasaki facility suggested several things to me. (1) I believe that Fujitsu has really taken the lead in Japanese parallel computing; by opening up the AP to outside users the company has gained an enormous amount of experience as well as visibility for their product. Their workshop reported an impressive number of applications as well as the usual confidence building exercises. 
(2) The Fujitsu--ANU collaboration is a big win for both sides. ANU has become a player in parallel computing research; Fujitsu has obtained not only the general experience of this group and a window into Western thinking, but more explicitly, several very specific pieces of useful software (including an extended utilities kernel, a parallel file system, a nice text retrieval package, system, language and compiler tools, and a variety of math library software). Recently, several of Australia's best computer scientists, who have done excellent numerical work on the AP, are now turning their attention to the VPP500 (Fujitsu's parallel-vector system), so Fujitsu will derive some benefit there too. At the workshop, both senior company management and ANU attendees were very enthusiastic. (3) There is competition between VPP and AP groups inside Fujitsu, as these machines originate from different divisions. Based on the workshop I assume that the AP side is feeling very good now. A list of titles, authors, and abstracts for PCW'93 follows this report. My hosts for PCW'93 were Mr. Shigeru Sato Board Director Fujitsu Laboratories 1015 Kamikodanaka Nakahara-ku Kawasaki 211, Japan Tel: +81 44 777-1111, Fax: +81 44 754-2530 Dr. Mitsuo Ishii General Manager, Parallel Computing Research Center Fujitsu Laboratories 1015, Kamikodanaka Nakahara-ku, Kawasaki 211 Japan Tel: +81 44 777 1111; Fax: +81 44 754 2666 Email: MISHI@FLAB.FUJITSU.CO.JP and Mr. Takao Saito Section Manager, Parallel Computing Research Center (same address & fax), Tel: +81 44 754 2670 Email: SAITO@FLAB.FUJITSU.CO.JP The format for PCW'93 was a morning of talks, in Japanese except for one invited speaker (see below), and two afternoon poster sessions totalling about 40 papers from research staff in seven countries. Nearly 200 scientists participated, showing that there is a very high level of interest in this technology. The posters were given mostly in English and Fujitsu has prepared an English Proceedings containing all the papers; a complete copy of this may be obtained by contacting Dr Ishii above. Fujitsu also pointed out that the AP has appeared in various international journals, including J Nuclear Science & Tech, Computational Fluid Dynamics J, Nuclear Physics, and the Journal of the Information Processing Society of Japan. Research using the AP has been reported at a dozen international conferences, as well as about 80 Japanese workshops and conferences. At the morning session the invited speaker was Prof John Darlington Director of Research, Department of Computing Imperial College of Science, Technology & Medicine 180 Queen's Gate London SW7 2BZ UK Tel: +44 71 589 5111 x 5059; Fax: +44 71 581 8024 Email: JC@DOC.IC.AC.UK His talk, titled "Parallel Computing in Europe, Trends and Speculations" focused on two questions, (1) What will be the characteristics of future applications for parallel machines, and (2) What technologies should we be working on to expedite these applications? He concentrated on three main issues as follows. Political Growth in support for high performance computing Application and use (industrial commercial, social) Away from grand challenges Machines for commercial/business applications (e.g. database) Applications are compositions of heterogeneous components involving intensive computation (calculation & inference), data retrieval, and visualization. Darlington concluded his talk with the following observations. 
(1) (Parallel) programming languages are too low level vehicles to create and adapt complex applications. (2) We need to create adaptable/composable components and systems with which to build applications out of these components. (3) Application construction should then be the task of domain specialists. (4) We should build (parallel) application generators, and hopes for the emergence of a "parallel application machine tool industry". REMARK: Fujitsu views their Parallel Computing Research Facility as a mechanism connecting the university research community with world markets. Making advanced computing technology available to the research community has worked well for US vendors in the sense of getting their products exposed and obtaining important feedback. There is no reason to think that it will not be equally successful for Fujitsu. ---------------------------------------------------------------------- ABSTRACTS OF PAPERS PRESENTED AT FUJITSU'S 2ND PARALLEL COMPUTING WORKSHOP 11 NOV 1993, KAWASAKI JAPAN Simulated Annealing Heuristics for Static Task Assignment Kazuo Horikawa et al. Department of Information Science Faculty of Engineering, Kyoto University As an approach to the static task assignment problem for parallel applications, we propose a modification of simulated annealing. In order to speed up the convergence of this method, we exploit heuristics to calculate the generation probabilities of neighboring solutions. We have proposed two heuristics for load balancing and one heuristics for reduction of interprocessor communication. And we examined their effect using some random TIGs. The results proved that the former heuristics are very effective at the beginning of the annealing process, while the latter improves the score after certain feasible load balance was achieved by the former heuristics. Parallel execution of functional programs on loosely coupled multiprocessor systems ------------------------------------------------------------- Tetsurou Tanaka et al. (No address) It has been suggested that functional programs are suitable for programming parallel computers owing to their inherent parallelism. We propose a parallel evaluation model of functional programs based on the STG (Spineless Tagless G-machine) model proposed for sequential evaluation, and describe our parallel implementation of a functional language Gofer on the AP1000 parallel computer. Porting the PVM Distributed Computing Environment to the Fujitsu AP1000 ------------------------------------------------------------- C. W. Johnson et al. Department of Computer Science Australian National University The PVM system (Portable Virtual Machine) is a programming environment for distributed parallel C and Fortran-based programs. Originally for distributed heterogeneous workstations, it has recently been ported to multiprocessors such as the Intel iPSC/2 and Thinking Machines Corporation's CM5. We have ported the PVM system to the Fujitsu AP1000. We describe the process and communications model used for PVM on the AP1000 and consider further work to improve functionality and performance, including changes mooted to the Cell operating system and the structure of the host controlling processes.Stride Collective Communication for the Fujitsu AP1000 ------------------------------------------------------------- Gavin Michael et al. 
Department of Computer Science The Australian National University In order to provide mechanisms for efficient distribution and retrieval of global data arrays it has been necessary to develop the new collective communication primitives distribute and concatenate. These primitives are referred to as stride collective communication primitives and are similar to the scatter and gather operations on the AP1000. However, the stride collective communication primitives operate on both the B-Net and the T-Net, and are not restricted to two dimensions. The new collective communication primitives exploit the stride DMA capability of the message controller on the AP1000. These primitives are used to support the distribute directive as defined in the High Performance Fortran language specification. They have also been used to support the automatic distribution and retrieval of global data arrays. The implementation of these communication primitives required minimal changes to the AP1000 kernel. These primitives are available in the AP1000 kernel developed at the Australian National University (sys.anu). ------------------------------------------------------------- Performance Measurement of the Acacia Parallel File System for the AP1000 Multicomputer Bradley M. Broom Department of Computer Science The Australian National University Acacia is a parallel file system being developed at the Australian National University for the AP1000 multi-computer. Acacia automatically distributes data across multiple disks attached to the AP1000 and caches recently used data, although the user can explicitly control the way file data is accessed and distributed. Unlike systems in which the I/O nodes and compute nodes are disjoint, Acacia uses option disks attached directly to a subset of the compute nodes within the AP1000. Processing of I/O requests is divided between the node that initiates the request, the node on which the data is potentially cached, and the node on which it is read from or stored to disk. In the current version of the disk option software, disk accesses are synchronous, precluding the use of prefetching or write-behind. This paper presents performance measurements of the Acacia File System running on the 128 node/32 option AP1000 installed at the Australian National University. The study includes measurements of read/write rates from one compute node to one disk, one compute node to multiple disks, and multiple compute nodes to multiple disks. Additional measurements show the effects of different buffer sizes and caching on system performance. ------------------------------------------------------------- An Image Fileserver for the AP1000 Kazuichi Ooe et al. Fujitsu Laboratories Ltd. E-mail: ooe@flab.fujitsu.co.jp We developed a high-performance image fileserver for the AP1000. We analyzed its performance, and confirmed that its speed is 90% of ideal. ------------------------------------------------------------- A Reconfigurable Torus Network Kenichi Hayashi et al. Fujitsu Laboratories Ltd. 1015 Kamikodanaka, Nakahara-ku Kawasaki 211, Japan Independent subgroups of a hypercubic or mesh distributed memory parallel processor (DMPP) may be accessed simultaneously by multiple users. Partitioning of a torus network, however, is complicated by wraparound paths. We present a novel architecture for a dynamically reconfigurable torus whose design geometrically folds the torus and intersperses switches between designated partitions.
The design avoids excess wiring and complex switches, and allows subgroups of processors to be accessed independently by different users. The use of network switches in the reconfigurable torus enables efficient global reduction and broadcasts. ------------------------------------------------------------- Data distributed volume rendering on the Fujitsu AP1000 Raju Karia Department of Computer Science Australian National University A scheme for the visualization of large data volumes using volume rendering on a distributed memory MIMD system is described. The data to be rendered is decomposed into subvolumes that reside in the local memories of the system's nodes. A partial image of the local data is generated at each node by ray tracing, and is then composited with partial images on other nodes in the correct order to generate the complete picture. Subvolumes whose voxels are classified as being mapped to zero opacity are not rendered, providing scope for improvement in rendering throughput. This optimization gives rise to load imbalance amongst nodes. Scattered decomposition is used for load balancing, involving an increase in the number of subvolumes, but also creating additional overheads due to increases in the number of intersections and the number of partial images to be composited. Hence, such decomposition is only effective up to a limit, beyond which the costs in overheads outweigh any potential improvement in throughput. We demonstrate the effect of scattered decomposition on the performance of the data distributed volume renderer with some experimental results. ------------------------------------------------------------- Parallel FEM Solution Based on Substructure Method Hideo Fukumori et al. School of Science and Engineering Waseda University, Tokyo Japan In this paper, we present an implementation of the substructure method on the Fujitsu AP1000. The substructure method is one of the methods that has been used for the solution of finite element problems, and it is applicable to parallel computing. The substructure method decomposes the domain of a finite element problem into multiple blocks. First, the solutions for the reduced structure, which consists of the boundary nodes of the blocks, are calculated. Then the solutions for the nodes within each block are obtained. The calculation stage for the internal nodes can be done independently, and by assigning the decomposed blocks to different processors, high parallel efficiency can be achieved in this stage, because the calculation can proceed in parallel without any need for communication. Compared to other methods based on spatial decomposition, the substructure method allows more flexible workload distribution across the processors. We implemented a finite element method program based on the substructure method. On a two-dimensional Poisson differential equation problem with 4096 nodes, a speedup ratio of 27.9 on 64 processors was obtained. ------------------------------------------------------------- Linear Algebra Research on the AP1000 R. P. Brent et al. Australian National University This paper gives a report on various results of the Linear Algebra Project on the Fujitsu AP1000 in 1993. These include the general implementation of Distributed BLAS Level 3 subroutines (for the scattered storage scheme). The performance and user interface issues of the implementation will be discussed. Implementations of distributed BLAS-based LU Decomposition, Cholesky Factorization and Star Product algorithms will be described.
------------------------------------------------------------- Linear Algebra Research on the AP1000 R. P. Brent et al. Australian National University This paper gives a report on various results of the Linear Algebra Project on the Fujitsu AP1000 in 1993. These include the general implementation of Distributed BLAS Level 3 subroutines (for the scattered storage scheme). The performance and user interface issues of the implementation will be discussed. Implementations of distributed BLAS-based LU Decomposition, Cholesky Factorization and Star Product algorithms will be described. The porting of the Basic Fourier Functions from the Fujitsu-ANU Area-4 Project to the AP1000 is also discussed. While the parallelization of the main FFT algorithm only involves communication in a single "transposition" step, several optimizations, including fast calculation of the roots of unity, are required for its efficient implementation. Several more optimizations of the Hestenes Singular Value Decomposition algorithm have been investigated, including a BLAS Level 3-like kernel for the main computation and partitioning strategies. A study of how all these optimizations affect convergence will also be discussed. Finally, work on implementing QR Factorization on the AP1000 will be discussed, where Householder QR was found to be more efficient than Givens QR. ------------------------------------------------------------- Progress Report on the Study of the Effect of Presence of Grain-Boundaries on Martensitic Transformation Tetsuro Suzuki Institute of Applied Physics University of Tsukuba, Tsukuba Japan The present author's purposes in using the AP1000 are twofold. The first is to take advantage of the parallel architecture to study the effect of the presence of grain boundaries on martensitic transformation. The second is to study education from a standpoint that is inaccessible without experiencing the various difficulties encountered in using the AP1000. ------------------------------------------------------------- Parallelization of 3-D Radiative Heat Transfer Analysis in Nongray Gas H. Taniguchi et al. Department of Mechanical Engineering Hokkaido University, Sapporo Japan Parallel computational methods are developed for the Monte Carlo analysis of radiative heat transfer in a three-dimensional nongray gas enclosed by gray walls. The highly parallel computer AP1000 is used for the analysis. For the nongray gas, a mixture of water vapor and carbon dioxide is chosen as the absorbing-emitting medium. Three types of parallelization are studied: type (1), event parallelization; type (2), a combination of event parallelization and algorithm parallelization; and type (3), a memory-saving algorithm based on type (2). When 512 cells are used, speed-up ratios of 395, 430 and 370 are obtained for types (1), (2) and (3), respectively. The maximum number of elements which can be treated is increased from 1,200 for type (1) to 140,000 for type (3). ------------------------------------------------------------- Probabilistic Fracture Mechanics Analysis on Massively Parallel Computer AP1000 Shinobu Yoshimura et al. University of Tokyo This paper describes a probabilistic fracture mechanics (PFM) computer program based on a parallel Monte Carlo (MC) algorithm. In the stratified MC algorithm, the sampling space of probabilistic variables, such as the fracture toughness value and the depth and aspect ratio of an initial semi-elliptical surface crack, is divided into a number of small cells. Fatigue crack growth simulations and failure judgments of those samples are performed cell by cell in parallel. Coalescence of multiple cracks during fatigue crack growth is considered in the analyses. The developed PFM program is implemented on a massively parallel computer composed of 512 processors, combined with a function for dynamic workload balancing. As an example, some life extension simulations of aged RPV material are performed, assuming analysis conditions of normal and upset operations of PWRs.
The results show that the cumulative breakage probabilities of the analyzed model are of the order of 10^{-7} (1/crack) and that the parallel performance always exceeds 90%. It is also demonstrated that degradation of fracture toughness values due to neutron irradiation and probabilistic variation of fracture toughness values significantly influence the failure probabilities. ------------------------------------------------------------- Domain decomposition program for polymer systems David Brown et al. Chemistry Department University of Manchester Institute of Science and Technology Manchester, M60 1QD United Kingdom A domain decomposition program capable of handling systems containing rigid bond constraints and three- and four-body potentials as well as non-bonded potentials has been successfully implemented. This program has been thoroughly tested against results obtained using scalar codes, and benchmarking is in progress together with a production simulation on the 1024 PE AP1000. Two parallel versions of the SHAKE algorithm, which solves the bond length constraints problem, have been compared in an attempt to optimize this procedure. ------------------------------------------------------------- A new technique to improve parallel automated single layer wire routing Hesham Keshk et al. Department of Information Science Kyoto University Yoshida-hon-machi Sakyo-ku, Kyoto 606-01 Automated wire routing finds a path between two or more network pins; this path must not intersect previously drawn paths. The basic problems of automated wire routing are the long computation time and the large memory size required. Recently, several research efforts have tried to speed up the routing problem by using parallel computers. We develop two parallel algorithms based on the maze running algorithm. Both of them use a new technique for dividing a single-layer grid. These two algorithms give high speed, especially if the net lengths are small with respect to the grid dimensions. In the first algorithm, moving boundaries are used in dividing the grid. The second algorithm can be divided into two phases. In the first phase, we rotate the areas assigned to the processors to route all short nets. In the second phase we use the competing processors algorithm to route the remaining nets.
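For background, the sequential maze running (Lee) algorithm that both parallel algorithms build on can be sketched roughly as below. This is the editor's illustration of the classical breadth-first wavefront expansion on a grid, expressed in C; it is not the authors' parallel code, and the grid size, array names and helper structure are arbitrary illustration choices.

#define W 64                 /* grid width  (arbitrary for illustration) */
#define H 64                 /* grid height (arbitrary for illustration) */

/* cell values: 0 = free, -1 = obstacle or previously routed wire */
static int grid[H][W];
static int dist[H][W];       /* wavefront labels, 0 = unreached */

static const int dx[4] = { 1, -1, 0, 0 };
static const int dy[4] = { 0, 0, 1, -1 };

/* Classical Lee maze routing: breadth-first wavefront expansion from the
   source until the target is labelled, then a backtrace from the target
   following decreasing labels yields a shortest rectilinear path.
   Source and target cells are assumed to be free. */
int lee_route(int sx, int sy, int tx, int ty)
{
    static int qx[W * H], qy[W * H];
    int head = 0, tail = 0;

    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            dist[y][x] = 0;

    dist[sy][sx] = 1;
    qx[tail] = sx; qy[tail] = sy; tail++;

    while (head < tail) {                    /* expansion phase */
        int x = qx[head], y = qy[head]; head++;
        if (x == tx && y == ty)
            break;
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            if (grid[ny][nx] != 0 || dist[ny][nx] != 0) continue;
            dist[ny][nx] = dist[y][x] + 1;
            qx[tail] = nx; qy[tail] = ny; tail++;
        }
    }
    if (dist[ty][tx] == 0)
        return -1;                           /* no route exists */

    /* backtrace phase: walk from target to source, marking the path as used */
    for (int x = tx, y = ty; dist[y][x] != 1; ) {
        grid[y][x] = -1;
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx >= 0 && nx < W && ny >= 0 && ny < H &&
                dist[ny][nx] == dist[y][x] - 1) {
                x = nx; y = ny;
                break;
            }
        }
    }
    return dist[ty][tx] - 1;                 /* path length in grid steps */
}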
------------------------------------------------------------- Towards a Practical Information Retrieval System for the Fujitsu AP1000 David Hawking et al. Department of Computer Science Australian National University This paper reports progress in the development of free text retrieval systems for the Fujitsu AP1000. Work is now focussed on the classical information retrieval problem, that of retrieving documents or articles relevant to a user's query. The current version of ftr permits use of the AP1000's DDV options for storing text bases, resulting in significant decreases in loading times. A new graphical user interface (called retrieve and based on tcl) provides a user-friendly mechanism for invoking the ftr system from remote workstations, specifying and carrying out searches, and selecting, retrieving, viewing, and storing whole entries from the text base being searched. A range of useful tools is being developed for text base administration purposes. Initial performance results are presented, and likely future directions for the work are outlined. ------------------------------------------------------------- Parallel Computer System for Visual Recognition H. Takahashi (no address) We have developed a parallel computer system for visual recognition based on Transputers. We have also implemented the "Neocognitron" neural network model of visual recognition on it. However, the running time did not decrease in inverse proportion to the number of processors. From an analysis of the results, we found that the communication time between processors prevented the execution time from decreasing. In this paper, we propose and examine a method for predicting the execution time of the Neocognitron application as the ratio of communication to calculation changes. With these results, we want to propose a design guide for dedicated parallel computer architectures and networks for practical image recognition and understanding. ------------------------------------------------------------- Futurespace: a coherent, cached, shared abstract data type Paul H. J. Kelly et al. Department of Computing Imperial College 180 Queen's Gate, London SW7 2BZ United Kingdom (no abstract) ------------------------------------------------------------- Design of a Programming Language for Set-Oriented Massively Parallel Processing Hirofumi Amano et al. Department of Computer Science and Communication Engineering Kyushu University The design of a new massively parallel programming language based on the object-parallel approach is reported. The language gives a more natural description of a certain type of application in which one cannot fix the number and the topology of the objects involved. This paper presents the shortcomings of data-parallel languages, and gives an overview of the new massively parallel programming language now under development. ------------------------------------------------------------- Implementation of UtiLisp/C on AP1000 Eiiti Wada et al. University of Tokyo An implementation of UtiLisp/C, a newly coded version of UtiLisp for Sparc workstations, has been attempted on the AP1000. The goal is to adapt UtiLisp to the AP1000, a distributed memory machine, with very few C-coded lines modified and a small number of special functions added, so that a basic feel for parallel programming can be obtained in a relatively short time. Fashionable functions often implemented in parallel Lisps, e.g., futures, are not yet included; only a set of inter-call stream mechanisms was added to execute top-level eval loops in parallel. The set of the original UtiLisp functions proved well suited for use on a MIMD machine. ------------------------------------------------------------- An implementation and evaluation of a VPP-Fortran compiler for AP1000 Tsunehisa Doi et al. Fujitsu Laboratories Ltd. VPP Fortran, a parallel programming language that makes a global address space accessible to the programmer, was originally developed to run on the Fujitsu VPP500 supercomputer. We developed a VPP Fortran processor that works with the AP1000 distributed-memory parallel computer. Called VPP-Fortran/AP, the processor uses a new mode of data communication, called direct remote data access (DRDA), to expedite indirect access via index arrays. This paper discusses possible approaches to implementing DRDA and presents results from experiments run on the AP1000. ------------------------------------------------------------- Implementing the PVM (Parallel Virtual Machine) on the AP1000 Shigenobu Iwashita et al.
Department of Information Systems Interdisciplinary Graduate School of Engineering Sciences Kyushu University Kasuga, Fukuoka 816 Japan (no abstract) ------------------------------------------------------------- Improvement of Communication Performance by Reduction of Traffic Conflict in a Parallel Computer Hiroyuki Kanazawa et al. Fujitsu Ltd. Communication within a parallel computer slows as the number of tasks communicating with each other increases, because of traffic conflict in the processor array. We improved performance by reducing traffic conflict through optimized allocation of tasks to processors. We applied combinations of min-cut methods and partial enumeration methods to allocate tasks to processors at the application software level using the Fujitsu AP1000 parallel computer. A simulation of our new method showed that it has better optimization speed and quality than simulated annealing for a problem size of 256 cells and 256 tasks. With genetic algorithms, we should be able to find better and faster solutions to more complex problems. ------------------------------------------------------------- Using the Bulk-Synchronous Parallel Model with Randomized Shared Memory for Graceful Degradation Andreas Savva et al. Faculty of Engineering Tokyo Institute of Technology 2-12-1 Ookayama Meguro-ku, Tokyo 152 The Bulk-Synchronous Parallel (BSP) model, a proposed bridging model for parallel computation, together with randomized shared memory (RSM) provides an asymptotically optimal emulation of the Parallel Random Access Machine (PRAM). This emulation does not depend on the network topology, but on the network throughput. By assuming that processor failures do not significantly affect network throughput, we show that it is possible to efficiently recover from single processor failures and reconfigure a new system that satisfies the same properties as the original one. Results from an implementation on a Massively Parallel Processor showing the effect of the number of processors and the global memory size on the reconfiguration time are presented. ------------------------------------------------------------- The LINPACK benchmark on the AP1000 with Numerical Computation Accelerators Kazuto Kamimura Fujitsu Laboratories, Ltd. 1015 Kamikodanaka Nakahara-ku, Kawasaki 211 This paper evaluates the performance of the Numerical Computation Accelerator (NCA) based on the LINPACK benchmark. The NCA was developed for the AP1000 to improve its computational performance. We vectorized several kernels within the Basic Linear Algebra Subprograms (BLAS), which are utilized in the LINPACK benchmark. We obtained 51% of the theoretical peak for single-cell execution, and 28-34% for multiple-cell execution. A 16-cell configuration with NCA can sustain performance comparable to a 128-cell configuration without NCA for a 1000 x 1000 matrix. ------------------------------------------------------------- Parallel Processing of Logic Functions Based on Binary Decision Diagrams Shinji Kimura et al. Graduate School of Information Science Nara Institute of Science and Technology 8916-5 Takayama Ikoma, Nara 630-01 Japan The paper describes parallel algorithms for binary decision diagram manipulation on the Fujitsu AP1000, a distributed memory multiprocessor system. A binary decision diagram is a directed acyclic graph representing a logic function. We have proposed a Shannon expansion method, a modified Shannon expansion method and an output separation method for load averaging.
We have obtained a 120-fold speed-up for some good examples using 512 processors, and 10- to 27-fold speed-ups for widely used benchmark data using 64 processors. We also show experimental results on the emulation of a global shared main memory on the AP1000, which gives us 16 gigabytes of main memory with about a 10-fold slowdown. ------------------------------------------------------------- Neural Network-Based Direct FEM on a Massively Parallel Computer AP1000 Genki Yagawa et al. Faculty of Engineering University of Tokyo This paper describes a new finite element method suitable for parallel computing that is based on interconnected neural networks (NNs). The method consists of three phases: (i) generating a network, i.e. a finite element mesh, over an analysis domain, (ii) replacing the functional of the FEM with the network energy of the interconnected NNs and (iii) minimizing the network energy by changing the states of arbitrary units on the basis of a transition rule. In order to demonstrate the fundamental performance of the method, a thermal conduction problem is analyzed using 16 and 64 processors of the massively parallel computer AP1000. The result shows that the solution obtained agrees well with the solution obtained by the conventional FEM. ------------------------------------------------------------- Error Analysis of Parallel Computation: Direct Solution Method of Linear Equations Syunsuke Baba et al. Department of Communications and Systems Engineering University of Electro-Communications This manuscript presents an error analysis of a parallel algorithm. We developed a sparse linear problem solver named Multi-Step Diakoptics (MSD). The solver is applicable to finite difference models of partial differential equations. One- and two-dimensional models were solved by MSD and the errors were analyzed. The results show a slight dependency of the error on the number of processors in the case of ill-conditioned problems. ------------------------------------------------------------- The Effect of Parallel Processing and Hill-Climbing Methods on Genetic Algorithms Hajime Ohi et al. Fujitsu Research Institute for Advanced Information System & Economics (no abstract) ------------------------------------------------------------- Dynamic balancing of computational loads of multiprocessor system in the DSMC simulation Mitsuo Yokokawa et al. Japan Atomic Energy Research Institute Tokai-mura, Naka-gun Ibaraki 319-11 The Direct Simulation Monte Carlo (DSMC) method is a numerical technique for simulating a wide variety of flows in the rarefied as well as the continuum region. A large number of simulated particles are used in the simulation of continuum flows and, therefore, a large amount of computational time is required. Parallel computation of the DSMC method is one way to reduce the computational time. In the parallel implementation of the DSMC method, the computational region is divided into subregions and the computation in each subregion is assigned to a processor. In this paper, an approach to dynamically balancing the computational loads of the processors is presented. ------------------------------------------------------------- Monte Carlo Shielding Calculation without Variance Reduction Techniques Makoto Takano et al. Fuel Cycle Safety Div. Japan Atomic Energy Research Institute For analyzing radiation shielding problems, the Monte Carlo method with continuous energy cross sections has the potential to supply the most accurate results.
However, conventional computers do not have enough power to solve such problems without various variance reduction techniques, which require experience on the user's part. In this paper, an attempt is made to solve shielding problems by pure analog Monte Carlo with the help of the AP1000. ------------------------------------------------------------- Parallel computing of Si-slab energy-band Itsuo Umebu Research Center for Computational Science Fujitsu Limited Motivated by interest in high-speed ab-initio calculation of the electronic states of semiconductors by parallel processing, and in quantum-size effects in low-dimensional Si crystals, we have calculated the energy bands of thin Si slabs by the pseudopotential method using the Fujitsu AP1000 parallel processor. The performance of parallel processing was better for thicker slabs: a speed-up ratio of 35 was attained for 16-layer slabs when the number of processing elements was increased in the ratio 432/6 (=72), but only 15 under the same conditions for 6-layer slabs. A saturation tendency in the performance with the number of processing elements was observed; its cause was attributed not to the communication time between processing elements but to the unparallelized part of the program. The communication time between processing elements was estimated from the amount of data transmitted and the number of calls to the communication subroutines, and it showed good agreement with the measured time. ------------------------------------------------------------- Direct simulation of Homogeneous Turbulence by Parallel Computation Kazuharu Ikai et al. Science University of Tokyo Parallel computation is attempted for the DNS (Direct Numerical Simulation) of turbulence using the spectral method. Homogeneous isotropic turbulence is calculated using up to 1024 processors with an efficiency of over 80%. Special attention is required in transferring data among the processors for an efficient parallel fast Fourier transform. ------------------------------------------------------------- The Implementation and Evaluation of Large Scale Neural Network "CombNET-II" on AP1000 Shinji Kuno et al. Dept. of Electrical and Computer Eng. Nagoya Institute of Technology Gokiso-cho, Showa-ku Nagoya 466 Japan CombNET-II is a large-scale neural network model with a comb structure. This network performs excellently on Kanji character recognition and voice recognition. However, it requires a large amount of computation and memory for learning, and learning takes a long time on a conventional engineering workstation. We implemented CombNET-II on the AP1000 with as much parallelism as possible and obtained high performance. The learning speed on the AP1000 is about 4 times that of a SPARCstation 10. ------------------------------------------------------------- Toward Parallel Simulation of Ecology System Tomomi Takashina et al. Department of Communications and Systems Engineering University of Electro-Communications This manuscript describes an analysis of fitting the results of a parallel simulation of an ecology system to the Lotka-Volterra model. The simulator was developed on the CAP-C3^1 at our laboratory. Rules of local interactions were modeled in order to demonstrate the global phenomenon of predator-prey interactions. Because of the restriction on the number of cells, each cell occupies one of 64 sub-areas and solves its part of the problem sequentially. We are now working toward a complete parallel simulation of the ecology system on the AP1000.
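For reference, the continuous Lotka-Volterra model that the simulation results are fitted to can be written down in a few lines; the sketch below is the editor's minimal illustration of the standard predator-prey equations with a simple Euler step, not the authors' cellular simulator, and the coefficients a, b, c, d and the step size are arbitrary.

#include <stdio.h>

/* Classical Lotka-Volterra predator-prey model, integrated with a simple
   Euler step:  dx/dt = a*x - b*x*y  (prey),  dy/dt = -c*y + d*x*y  (predator).
   Coefficient and step-size values are arbitrary illustration choices. */
int main(void)
{
    double x = 10.0, y = 5.0;              /* prey and predator populations */
    double a = 1.1, b = 0.4, c = 0.4, d = 0.1;
    double dt = 0.001;

    for (int step = 0; step < 100000; step++) {
        double dx = ( a * x - b * x * y) * dt;
        double dy = (-c * y + d * x * y) * dt;
        x += dx;
        y += dy;
        if (step % 10000 == 0)
            printf("t=%6.1f  prey=%8.3f  predator=%8.3f\n", step * dt, x, y);
    }
    return 0;
}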
------------------------------------------------------------- Parallel Speculative Computation of Simulated Annealing on the AP1000 Multiprocessor Andrew Sohn Dept. of Computer and Information Science New Jersey Institute of Technology Newark, NJ 07102-1982 Simulated annealing is known to be highly sequential due to loop-carried dependencies. While conventional speculative computation with a binary tree has been found effective for parallel simulated annealing, its performance is limited to (log n)-fold speedup on n processors due to the parallel execution of log n iterations. This report presents a new approach to parallel simulated annealing, called generalized speculative computation (GSC). We use an n-ary speculative tree and loop indices to execute n iterations in parallel on n processors while maintaining the same decision sequence as sequential simulated annealing. To verify the performance of GSC, we implement 100- to 500-city Traveling Salesman Problems on the AP1000 massively parallel multiprocessor. Actual execution results demonstrate that the new GSC approach can indeed be an effective method for parallel simulated annealing. We obtain over 20-fold speedup for an initial temperature of 0.1 and 11-fold speedup for an initial temperature of 10, both on 100 processors. ------------------------------------------------------------- A Parallel Genetic Algorithm Retaining Sequential Behaviors on Distributed-Memory Multiprocessors Jongho Nang IIAS-SIS Fujitsu Laboratories Ltd. 1-9-3 Nakase Mihama-ku, Chiba-shi 261 This paper proposes a simple and efficient scheme for parallelizing the genetic algorithm on a distributed-memory multiprocessor that maintains the execution behavior of the sequential genetic algorithm. In this parallelizing scheme, the global population is evenly partitioned into several subpopulations, each of which is assigned to a processor to be evolved in parallel. An interprocessor communication pattern, called AAB (All-To-All Broadcasting), is used at each generation to exchange information on the individuals evolved in all the other processors. This allows each processor to perform reproduction in a global sense; in other words, the sequential execution behavior can be maintained in the parallelized genetic algorithm. This paper shows that genetic algorithms employing widely used selection schemes such as proportionate selection, ranking selection, and tournament selection can be efficiently parallelized on a distributed-memory multiprocessor using the proposed scheme. Some experimental speedups on the AP1000 are also presented to show the usefulness of the proposed parallelizing scheme.
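As a rough illustration of the scheme just described, the per-generation structure might look like the sketch below. This is the editor's reading of the abstract, not the author's code; eval_fitness(), all_to_all_broadcast(), global_select() and crossover_mutate() are hypothetical helpers standing in for the AP1000 communication routine and the usual GA operators, and the population and processor counts are arbitrary.

/* Hypothetical helpers -- placeholders only, not a real AP1000 or GA API. */
typedef struct { double gene[32]; double fitness; } Individual;

double      eval_fitness(const Individual *ind);
void        all_to_all_broadcast(const Individual *local, int nbytes,
                                 Individual *global);
Individual *global_select(Individual *pop, int popsize);
void        crossover_mutate(const Individual *pa, const Individual *pb,
                             Individual *child);

#define POP   256              /* global population size (illustrative) */
#define NPROC  16              /* number of processors   (illustrative) */
#define LOCAL (POP / NPROC)    /* individuals owned by each processor   */

void ga_generation(Individual local[LOCAL], Individual global[POP])
{
    int i;

    /* 1. evaluate only the locally owned subpopulation */
    for (i = 0; i < LOCAL; i++)
        local[i].fitness = eval_fitness(&local[i]);

    /* 2. AAB: every processor broadcasts its subpopulation, so each one
          ends up holding a copy of the entire global population          */
    all_to_all_broadcast(local, LOCAL * (int)sizeof(Individual), global);

    /* 3. selection is performed over the *global* population on every
          processor, which is what lets the parallel run reproduce the
          sequential GA's behavior; each processor then produces only its
          own LOCAL share of the next generation                          */
    for (i = 0; i < LOCAL; i++) {
        Individual *pa = global_select(global, POP);
        Individual *pb = global_select(global, POP);
        crossover_mutate(pa, pb, &local[i]);
    }
}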
--------------------------------END OF REPORT----------------------- Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super,comp.arch From: schaller@lrz-muenchen.de (Christian Schaller) Subject: Parallelization guide for KSR1 Keywords: KSR1 Reply-To: schaller@lrz-muenchen.de Organization: Leibniz-Rechenzentrum, Muenchen (Germany) Parallelization Guide for the KSR1 ---------------------------------- A first version of a Parallelization Guide for the KSR1 is available. It is addressed to the users of our computing centre; that's why it's written in German. The title is "Optimierung und Parallelisierung auf dem Parallelrechner SNI-KSR1". It's got a lot of examples, so it may even be useful for those who 'speak' Fortran but don't understand German. Connect to ftp.lrz-muenchen.de, change directory to pub/comp/parallel/umdruck and get par_guide.ps. Your ideas and comments are welcome. -- Christian Schaller email: schaller@lrz-muenchen.de Leibniz-Rechenzentrum phone: +49-89-2105-8771 Barer Str. 21 fax: +49-89-2809460 80333 Munich Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Yann Bongiovanni Subject: Help to connect SyDOS PUMA 88 with Parallel Adapter to SCSI Port Date: Wed, 29 Dec 1993 19:44:51 +0100 Organization: It's just me. Nntp-Posting-Host: ybo.muc.de X-Useragent: Version 1.1.3 X-Xxdate: Wed, 29 Dec 93 19:44:45 GMT I currently use a SyDOS PUMA 88 external drive connected to the parallel port of my MS-DOS notebook computer. Since I have bought a Macintosh I'd like to connect it to its SCSI port. Has anyone accomplished this? In the documentation, SyDOS says that the device normally has a SCSI interface and contains a special adapter for parallel ports. In fact, the connection cable is interrupted by a small black box containing a SCSI-to-Parallel adapter. The pity is that everything is soldered and the only plug the device has is the one which goes into the parallel port of the computer. The only thing I could do is probably to build connectors, so that if I want to connect the device to my Mac, I can bypass the SCSI-to-Parallel adapter... but how? I have no idea how to make the connections. It would be great if anyone could help me out with this (SyDOS, are you listening?). Thank you in advance. Yann Bongiovanni Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: ced@indy1.dl.ac.uk (C.E.Dean) Newsgroups: comp.parallel,comp.parallel.pvm Subject: Re: What is the equivalent of csend()/intel to nwrite()/nCUBE Date: 30 Dec 1993 10:23:36 GMT Organization: SERC Daresbury Laboratory Sender: ced@indy1 (C.E.Dean) References: <1993Dec29.181637.28333@hubcap.clemson.edu> Nntp-Posting-Host: indy1.dl.ac.uk Hello, I have never used the nCUBE, but I can tell you what the parameters to the Intel calls mean. csend: Blocking subroutine to send a message to another node or nodes. call csend(type,buf,len,node,pid) type: Integer -- Identifies the type of message being sent. Used in the matching crecv to match messages. buf: Integer array -- Refers to the buffer that contains the message to be sent. Can be any legal data type. len: Positive Integer -- Specifies the number of bytes to send. node: Integer -- Destination node for the message (node = -1 for broadcast). pid: Integer -- Specifies the id of the process that is to receive the message. Present in the call but not used on the iPSC/860, as each node only supports one process at a time. irecv: Non-blocking receive. msid = irecv(typesel,buf,len) typesel: Integer -- Specifies the type(s) of message(s) to be received. If non-negative, the specific message type will be recognised. If -1, the first message to arrive will be recognised. buf: Integer array -- Refers to the buffer where the received message will be stored. len: Integer -- Specifies the size (in bytes) of the message buffer. msgwait: call msgwait(msid) Blocking call. The non-blocking irecv assigned an id (msid in this example) to the receive request. Use msgwait to block execution until the receive has been completed.
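Putting the three calls together, the kind of exchange in the original question corresponds roughly to the following pattern, shown here in C notation rather than Fortran. This is only an editor's sketch of the usual post-receive/send/wait idiom; the tag, buffer size and neighbour values are illustrative, the declarations normally come from the Intel-supplied header, and the nCUBE equivalents are not shown.

/* csend(), irecv() and msgwait() come from the Intel-supplied library;
   the declarations below follow the descriptions above (see the manual
   for the exact prototypes). */
extern void csend(long type, char *buf, long count, long node, long ptype);
extern long irecv(long typesel, char *buf, long count);
extern void msgwait(long msid);

#define TYPE  17L                       /* message tag; illustrative value */
#define COUNT 1024                      /* number of doubles exchanged     */

double sendbuf[COUNT], recvbuf[COUNT];

void exchange_with(long neighbour)      /* neighbour = destination node id */
{
    long msid;

    /* post the non-blocking receive first, so the incoming message has
       somewhere to land even if the neighbour sends before we do        */
    msid = irecv(TYPE, (char *) recvbuf, (long) sizeof recvbuf);

    /* blocking send: returns once sendbuf has been copied to a system
       buffer, not when the neighbour has actually received it           */
    csend(TYPE, (char *) sendbuf, (long) sizeof sendbuf, neighbour, 0L);

    /* block until the receive posted above has completed */
    msgwait(msid);
}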
Source of this info: Intel Programmer's Reference Manual. Hope this helps, Chris Dean, S.E.R.C. Daresbury Laboratory, Warrington, UK Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: fineberg@netcom.com (Samuel A. Fineberg) Subject: Re: What is the equivalent of csend()/intel to nwrite()/nCUBE Organization: NETCOM On-line Communication Services (408 241-9760 guest) References: <1993Dec29.181637.28333@hubcap.clemson.edu> Venkata Chaganti (vchagant@uahcs2.cs.uah.edu) wrote: : Hi INTEL/nCUBE GURUS : I was given a code written for intel machine and asked to port to : nCUBE. I don't have intel manuals. : Please, some one could help me how i can write the following commands : in intel to nCUBE. : msid = irecv(kxp,g,2*npr*kxp) : if(...) : csend(1+kxp,f,2*npr*kxp,kb-1,0) : else : csend(1+kxp,f,2*npr*kxp,kb+1,0) : endif : call msgwait(msid) : Actually i would like to know the what each parameter means in : csend(),irecv(),msgwait() csend is a blocking send (though it returns after the user buffer has been copied to a system buffer, not after the data has been received): csend(int type, char *buf, int count, int node, int ptype) type: a tag used for matching messages (i.e., a crecv will only pick up messages sent with the same type). buf: a pointer to the data to be sent count: number of bytes to be sent node: who to send the data to, -1 means broadcast to everyone ptype (for the Paragon, also called pid on the iPSC/860): another matching value, ostensibly for when a processor uses more than one thread. I have not yet seen this used on the Paragon and the only legal value for this on the iPSC/860 is 0. I always set this to 0. irecv is a non-blocking receive, crecv is the blocking version and has the same parameters but doesn't return anything: int mid = irecv(int type, char *buf, count) type: -1 will match any tag, anything else will mean that only messages sent with the same tag will be read. count: the number of bytes to read mid: a message id specifying this particular irecv Note: the nCUBE allows you to specify the sender when receiving, Intel does not. Also, an Intel receive does not give you information on who sent the data; for that you must call infonode() after the receive (infonode returns the node number of the sender of the last message received). msgwait(mid) simply blocks until the non-blocking send or receive that returned the message id, mid, completes. Sam Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jmc3@engr.engr.uark.edu (Dr. James M. Conrad) Subject: Paper - Supercomputing '93 Reply-To: jmc3@engr.engr.uark.edu Organization: University of Arkansas College of Engineering I am looking for the following paper from Supercomputing '93. Does anyone know how I can reach the authors and/or get a copy (ftp site?)? "A Distributed Shared Memory Multiprocessor ASURA-memory and Cache Architectures," by S. Mori, H. Saito, M. Goshima, Kyoto U., Japan, M. Yanagihara, T. Tanaka, D. Fraser, K. Joe, H. Nitta, Kubota Corporation. ------------------------------------------------------------------------ James M.
Conrad, Assistant Professor jmc3@jconrad.engr.uark.edu Computer Systems Engineering Department jmc3@engr.engr.uark.edu University of Arkansas, 313 Engineering Hall, Fayetteville, AR 72701-1201 Dept: (501) 575-6036 Office: (501) 575-6039 FAX: (501) 575-5339 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: meunierp@enst.enst.fr (Philippe Meunier) Subject: Re: asynchronous IO (message passing) Organization: Telecom Paris (ENST), France References: <1993Dec15.144124.27510@hubcap.clemson.edu> <1993Dec20.141813.9931@hubcap.clemson.edu> In article <1993Dec20.141813.9931@hubcap.clemson.edu>, jim@meiko.co.uk (James Cownie) writes: |> There are many reasons not to support the interrupt style |> including :- |> |> 1) It's a horrible way to write code. It's like writing big |> chunks of stuff in signal handlers... Well, not always. About 5 monthes ago i have seen a really nice program that was using asynchronous message passing (hrecv() on an intelPSC/860 machine). There was a master and several slaves processes. The slaves were asking the master for some work to do, doing the work and then asking the master again for some work... For the sake of efficiency, the master was doing some work too. When a slave asked for some work, the master's work was interupted, some work was sent to the slave by the signal handler and then the master just continued what he was doing at the time he was interupted. So the slaves weren't waiting too much for some work to do, and the master was doing some work himself, which was increasing the overall speed. |> 2) It's hard to implement (and in particular to get right !) |> 3) It's hard to specify (e.g. Can you communicate from within |> the message handler ? Can you probe for other messages here ? |> etc...) |> 4) all the other things I can't remember at the moment ! Yup. For those who are interested, here is a posting i have found some monthes ago: ---------------------------------------------------- Article: 470 of comp.parallel.pvm From: mcavende@ringer.cs.utsa.edu (Mark Cavender) Subject: Re: Asynchronous Receive Organization: University of Texas at San Antonio Date: Thu, 5 Aug 1993 03:14:17 GMT [...] I have been working on a version of PVM 2.4.2 that has asynchronous receives and sends, and hrecv for my thesis. It also has hsnd. It seems to work OK but hasn't been tested much. It currently only runs on Sun systems running SunOS 4.1.2. I am trying to port it to Solaris 2.1 but it has been a bear. Mark Cavender University of Texas at San Antonio ----------------------------------------------------- Hope this helps... Philippe Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: gld@bonjour.cc.columbia.edu (Gary L Dare) Newsgroups: comp.parallel,comp.sys.super Subject: User Reports on the IBM PowerParallel? Organization: The Bloomingdale Insane Asylum (now Columbia University) Would anyone know of any user reports on the IBM PowerParallel "super" that are available to the general computing community? I know that Cornell's Theory Center has one, but they're all on vacation right now ... I do have the grand Net FTP List, but if someone can quickly point to a spot at Cornell rather than have me traverse all over the place that'd be much appreciated ... gld PS: Please post your responses in case anyone else is interested. Thanks in advance for any info! -- ~~~~~~~~~~~~~~~~~~~~~~~~ Je me souviens ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Gary L. 
Dare > gld@cunixd.cc.columbia.EDU Support NAFTA > gld@cunixc.BITNET Eat Mexican (El Teddy's ad) Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: dbader@eng.umd.edu (David Bader) Subject: Re: a question on C* Date: 31 Dec 1993 16:49:59 GMT Organization: Project GLUE, University of Maryland, College Park References: <1993Dec27.144642.25200@hubcap.clemson.edu> In article <1993Dec27.144642.25200@hubcap.clemson.edu> mlevin@husc8.harvard.edu (Michael Levin) writes: > I have a couple of questions on C* (which I am using on a Sun front >end to access a CM-2): (I haven't found any of this in the meager >documentation I've been able to obtain) If you have access to a CM-2, you will most certainly need the C* programming manual to program in C*. I don't have mine in front of me currently, but to use math functions on parallel arrays, you will need to include TMC's "pmath.h". Check out this header file for the versions of "overloaded" math functions for parallel arrays. -david David A. Bader Electrical Engineering Department A.V. Williams Building - Room 3142-A University of Maryland College Park, MD 20742 301-405-6755 Internet: dbader@eng.umd.edu Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.sys.super From: jet@nas.nasa.gov (J. Eric Townsend) Subject: mailing list info on TMC CM-5, Intel iPSC/860, Intel Paragon Sender: news@nas.nasa.gov (News Administrator) Organization: NAS/NASA-Ames Research Center J. Eric Townsend (jet@nas.nasa.gov) last updated: 29 Nov 1993 (updated mailing addresses) This file is posted to USENET automatically on the 1st and 15th of each month. It is mailed to the respective lists to remind users how to unsubscribe and set options. INTRODUCTION ------------ Several mailing lists exist at NAS for the discussion of using and administrating Thinking Machines CM-5 and Intel iPSC/860 parallel supercomputers. These mailing lists are open to all persons interested in the systems. The lists are: LIST-NAME DESCRIPTION cm5-managers -- discussion of administrating the TMC CM-5 cm5-users -- " " using the TMC CM-5 ipsc-managers -- " " administrating the Intel iPSC/860 ipsc-users -- " " using the Intel iPSC/860 paragon-managers -- " " administrating the Intel Paragon paragon-users -- " " using the Intel Paragon The ipsc-* lists at cornell are going away, the lists here will replace them. (ISUG members will be receiving information on this in the near future.) The cm5-users list is intended to complement the lbolt list at MSC. SUBSCRIBING/UNSUBSCRIBING ------------------------- All of the above lists are run with the listserv package. In the examples below, substitute the name of the list from the above table for the text "LIST-NAME". To subscribe to any of the lists, send email to listserv@nas.nasa.gov with a *BODY* of subscribe LIST-NAME your_full_name Please note: - you are subscribed with the address that you sent the email from. You cannot subscribe an address other than your own. This is considered a security feature, but I haven't gotten around to taking it out. - your subscription will be handled by software, so any other text you send will be ignored Unsubscribing It is important to understand that you can only unsubscribe from the address you subscribed from. If that is impossible, please contact jet@nas.nasa.gov to be unsubscribed by hand. ONLY DO THIS IF FOLLOWING THE INSTRUCTIONS DOES NOT PRODUCE THE DESIRED RESULTS! 
I have better things to do than manually do things that can be automated. To unsubscribe from any of the mailing lists, send email to listserv@nas.nasa.gov with a body of unsubscribe LIST-NAME OPTIONS ------- If you wish to receive a list in digest form, send a message to listserv@nas.nasa.gov with a body of set LIST-NAME mail digest OBTAINING ARCHIVES ------------------ There are currently no publicly available archives. As time goes on, archives of the lists will be made available. Watch this space. -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Sat, 1 Jan 1994 12:47:59 -0600 From: paprzycki_m@gusher.pb.utexas.edu Subject: Parallel Programming Teaching Dear Netters, I am collecting a bibliography of materials related to Parallel Programming Teaching. Where parallel programming should be understood very broadly and encompassing issues related to teaching of any subject realed to parallel/distributed/high performance programming/computing. If you have any pointers please send them to: paprzycki_m@gusher.pb.utexas.edu I will summarize the replies to the list. THANK YOU and HAPPY NEW YEAR!!!! Marcin Paprzycki Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Sat, 1 Jan 94 19:02:07 PST From: Bob Means Subject: email address of Hecht-Nielson Neurocomputer Andrei, You can contact Hecht-Nielsen Neurocomputer, now HNC, at my email address rwmeans@hnc.com Best wishes for the new year. Bob Means Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: jeg@theory.TC.Cornell.EDU (Jerry Gerner) Newsgroups: comp.parallel,comp.sys.super Subject: Re: User Reports on the IBM PowerParallel? Organization: Cornell Theory Center References: <1994Jan3.145257.9846@hubcap.clemson.edu> In article <1994Jan3.145257.9846@hubcap.clemson.edu>, gld@bonjour.cc.columbia.edu (Gary L Dare) writes: |> Would anyone know of any user reports on the IBM PowerParallel |> "super" that are available to the general computing community? |> I know that Cornell's Theory Center has one, but they're all on |> vacation right now ... I do have the grand Net FTP List, but if |> someone can quickly point to a spot at Cornell rather than have |> me traverse all over the place that'd be much appreciated ... Some of us are back! Nothing publicly available at this time. We have the usual internal reports by staff, "friendly users", etc. As soon as this, and/or other, material is ready for prime time we'll post something. Thanks for your interest. As MarkW mentioned, please feel free to browse around our gopher server. At the moment very little in the way of "user reports" is available,... in fact nothing at all. We have the usual internal documents, but nothing that's ready for "prime time". I'll post a note to the usual newsgroups when the first publicly-available user reports, etc. are ready, what they are, where they are, etc. The Theory Center doesn't have anything currently for "prime-time" consumption, though you're welcome to browse though our gopher server as MarkW has suggested. 
Once we have some useful information from both staff and users on our experiences with our SP1 (64 nodes at the moment plus the hi-speed switch) I'll post a note with the "what, where, etc." For the moment however, reports from staff and "friendly users" are being distributed internally-only, and to the vendor, and to the NSF and other interested parties, etc. Watch this space for more news. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: yedwest@utirc.utoronto.ca (Dr. Edmund West) Subject: for comp.parallel Date: Mon, 3 Jan 1994 12:16:17 -0500 (EST) Preliminary Information about SS '94 Supercomputing Symposium '94 Canada's Eighth Annual High Performance Computing Conference and Exhibition June 6-8, 1994 Sponsored by: Supercomputing Canada (Super*Can) Hosted by: University of Toronto * Conference Plans: The symposium is concerned with all aspects of high performance computing. World class invited speakers and informative technical sessions will expose attendees to the latest developments in this field. There will also be an exhibition of vendor products. Presentations will be made that describe the hardware, software, programming tools and techniques, applications, networks and experiences related to high performance computing. Topics will include, but are not limited to: o vector computing technology o parallel computing technology o workstation clusters o parallel programming techniques o distributed computing o languages for high performance computing o experiences with high performance computers o applications of high performance computing technology to solve problems in government, academia and industry o networking and communications o scientific data visualization o compiler and operating system issues * Deadlines: Call for Papers distributed: January 5, 1994 Registration materials distributed: January 21, 1994 Abstract submissions due: February 14, 1994 Notification of acceptance: March 14, 1994 Camera-ready papers due: May 6, 1994 Early registration: May 6, 1994 * Hotel: Holiday Inn on King 370 King Street West Toronto, Ontario M5V 1J9 Canada [Tel: 416-599-4000; Fax: 416-599-7394] A limited number of rooms are being held for SS'94 attendees at a special rate of C$99.00 (single or double). Please contact the hotel directly to reserve your room. Mention "Supercomputing Symposium '94" in order to obtain the conference rate. Rooms not reserved by May 5, 1994 will be released. * Conference Social Scene o Ice-breaker Reception (Sunday evening, June 5, 1994) o Conference Banquet (Monday evening, June 6, 1994) o Special Entertainment * Extracurricular Activities o Toronto theater: "Miss Saigon", "Showboat", "Phantom of the Opera" o Blue Jays Baseball o The Nightlife of Toronto For more information contact the Conference Chairman: Dr. Edmund West 4 Bancroft Avenue, Room 202 University of Toronto Toronto, Ontario M5S 1A1 416-978-4085 * yedwest@utirc.utoronto.ca [ftp info at ftp.utirc.utoronto.ca in /pub/SS94] ========================================================================== Some participants in Supercomputing Symposium '94 may also be interested in the CFD 94 conference, being held in Toronto on the Thursday and Friday (June 2 and 3) immediately preceeding SS '94. CONFERENCE ANNOUNCEMENT: CFD 94 CFD 94 is the annual meeting of the Computational Fluid Dynamics Society of Canada. 
The objectives of CFD 94 are to bring together researchers and practitioners in computational fluid dynamics (CFD) and to promote this methodology. CFD 94 addresses all aspects of CFD including, but not restricted to, acoustics, aerodynamics, astrophysics, automotive engineering, biomedicine, hypersonics, industrial engineering, new CFD algorithms, process engineering, transport in porous media, and weather prediction. Individuals, companies, vendors and government research labs are invited to submit scientific presentations. CFD-oriented companies and consultants are encouraged to participate in CFD 94 with exhibits of hardware, software and applications. Abstract submission deadline: 14 January 1994 Preregistration deadline: 30 April 1994 For further information, contact either: Prof. C. Ross Ethier email: ethier@me.utoronto.ca voice: (416) 978-6728 fax: (416) 978-7753 Prof. James J. Gottlieb email: gottlieb@bach.utias.utoronto.ca voice: (416) 667-7740 fax: (416) 667-7799 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Andreas.Ruecker@arbi.informatik.uni-oldenburg.de (Andreas Ruecker) Subject: Questions about ORCA Organization: University of Oldenburg, Germany Is there anyone who has experience with the parallel language ORCA (it's a language for parallel programming of distributed systems) from Henri E. Bal? I need this information for a paper. Special questions are: 1. Is there any criticism of this language design? 2. Are there other related languages? 3. Evaluation of the language in real-world situations? 4. What about the spread and importance of ORCA? 5. How object-oriented is ORCA? 6. The pros and cons of ORCA? All other information related to the topic of parallel languages in distributed systems would also be welcome. Thanx Andreas Ruecker ----- e-mail: Andreas.Ruecker@arbi.informatik.uni-oldenburg.de fax: +49 441 63450 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cronk@shrimp.icase.edu (Cronk David) Subject: Help with threads Package Organization: ICASE/NASA Langley I am currently trying to port an existing threads package to the Intel Paragon machine. I was wondering if anybody has done any thread work on the Paragon. I am in particular need of help with regard to save and restore. I seem to be losing my local stack pointer when I do a long jump. When I return from a context switch I have lost my return address for when I complete the current routine. The current program counter is restored fine; it's the address of the calling routine that I seem to lose. Any help would be greatly appreciated. Dave Cronk. cronk@icase.edu ICASE NASA Langley Research Center (804) 864-8361 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sinha-arani@CS.YALE.EDU (Arani Sinha) Subject: Language vs. Library in a distributed environment Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158 In the survey paper "Programming Languages for Distributed Computing Systems" by Bal, Steiner and Tanenbaum (ACM Computing Surveys, Sept. '89), the authors discuss the advantages of a language over a library as a choice of distributed programming environment. They say: 1. If a library is used, the constructs of a sequential language have to be used for implementation. The constructs and data types of a sequential language are considered inadequate for distributed programming. 2.
They also claim that a parallel language offers improved readability, type checking etc. I want to know what the netters think of these views. Do you think that a parallel language is better than a library for distributed programming? Arani Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: fisher@netcom.com (John Fisher) Subject: Parallel C++ (pC++) Organization: NETCOM On-line Communication Services (408 241-9760 guest) Hi Everyone, I am looking for any information I can find on Parallel C++ (pC++). I know it is the work of Indiana University, but I would be interested in any papers that might have been published or available via FTP. Thanks, John Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: sinha-arani@CS.YALE.EDU (Arani Sinha) Subject: Comparison between Linda and PVM Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158 I am trying to compare Linda and PVM as an environment for writing parallel programs. I want to implement CAD VLSI applications in a distributed environment. I will be obliged if interested netters let me know of the advantages/ disadvantages of each. Thanks, Arani Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: Zhe Li Subject: PVM vs. C-linda Hi world: I am in the process of choosing a parallel processing toolkit to perform a data intensive computations (e.g., heavy inter-host communication) as part of my PHD thesis work. I playe around with PVM but have not gained any experience with C-linda yet. Could anyone share some hands-on experience about the pros and cons of these two packages? Issues such as their applicability, efficient handling of inter-host communication, ease of programming etc. are particularly interesting to me. Maybe some linda guys from Yale could give some pointers on this? Many thanks! Please email to me and I will summarize if there is enough interest. /Jay Li li@cs.columbia.edu, Dept. of Computer Science, Columbia Univ. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: andreas@c3440.edvz.uni-linz.ac.at (Andreas Hausleitner) Subject: example for vectorisation Organization: Technical University Vienna/Austria Does anyone know, from where I can get a well suited program for vectorization on a vector supercomputer (like Cray, Convex ..), which achieves good speedup and can also be parallelized for such shared memory multiprocessor machines. Available as source or in a paper. Thanks Andreas Hausleitner _____________________________ Andreas Hausleitner Unversity of LINZ / Austria Department for Supercomputing email: hausleitner@edvz.uni-linz.ac.at Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel.pvm,comp.parallel,lrz.parallel From: schaller@lrz-muenchen.de (Christian Schaller) Subject: PVM on KSR computers Reply-To: schaller@lrz-muenchen.de Organization: Leibniz-Rechenzentrum, Muenchen (Germany) We are using the Public Domain Version of PVM (3.2.4) on our KSR for quite a while. We always had good experiences. Recently I tried to install the ported Version of PVM 3.1, which makes use of the Virtual Shared Memory System. I was running in a lot of problems. In the end I didn't release the version to the users of our site. 
I'm writing this note to give my experiences to the public and, maybe get some feedback from other sites running this version of PVM. I know that this version is said to be 'not supported'. - installing this version was rather complicated compared to the Public Domain Version. I had to do some source code changes. Also the examples were just copied from the Public Domain Version. I had to insert the calls to 'ksrinit()' to allocate the shared memory buffers. - the second start of an application after starting the demon produced the message 'Increment MAX_SHARED_MEM and recompile'. I tried two work-arounds: (a) cancel the old demon and start a new one (o.k.) (b) change the size of MAX_SHARED_MEM and recompile the whole system. In the case of (b) the machine was just allocating my shared memory buffer and doing nothing else. No way! - So I always used the alternative (a). But even then the machine was not working as we were used to. Processes like 'ps' were not terminating and there was no way to kill them. We had to restart the machine. This is some of my experiences with PVM on the KSR1. I hope this is helpfull information for some of you. -- Christian Schaller email: schaller@lrz-muenchen.de Leibniz-Rechenzentrum phone: +49-89-2105-8771 Barer Str. 21 fax: +49-89-2809460 80333 Munich Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Newsgroups: ia.jobs,uiowa.jobs,comp.parallel From: jbrown@umaxa.weeg.uiowa.edu (Judy Brown) Subject: Senior Computing Consultant Position Organization: University of Iowa, Iowa City, IA, USA Senior Computing Consultant Advanced Research Computing Services Weeg Computing Center The University of Iowa The Advanced Research Computing Services group at Weeg Computing Center is seeking a Senior Computing Consultant whose primary responsibility is to provide technical consulting to researchers on the Encore Multimax, IBM 3090, and distributed workstations, and on high performance computing (HPC) platforms off campus. The consultant will serve as an HPC information resource, assist with supercomputer allocations, and lead distributed processing efforts. Requirements include a Master's degree in Computer Science or a related field, or an equivalent combination of education; experience with UNIX and multiple computing platforms; and good organizational and communications skills. Experience with Mac, MVS, vectorization, parallelization, C, FORTRAN, distributed processing, PVM, Linda, and visualization tools is desired. Send resumes to Judy Brown, Weeg Computing Center, 134A LC, The University of Iowa, Iowa City, IA 52242. Resume screening will begin January 10, 1994. Resumes in electronic form may be sent via electronic mail to judy-brown@uiowa.edu. The University of Iowa is an Affirmative Action Equal Opportunity Employer. Women and minorities are encouraged to apply. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: DKGCampb@cen.ex.ac.uk Subject: GRIDS report wanted Organization: University of Exeter, UK. I'm looking to get hold of the follwing document: A. Reuter, U. Geuder, M. Hardtner, B. Worner & R. Zink "GRIDS User's Guide", Report 4/93, University of Stuttgart, 1993. If anyone can point me in the direction of an FTP site or send me a copy I'd be most grateful. -- Duncan Campbell Acknowledgement: I'd like to thank me, Department of Computer Science without whom none of this University of Exeter, would have been possible. 
Prince of Wales Road, Exeter EX4 4PT Tel: +44 392 264063 Telex: 42894 EXUNIV G United Kingdom Fax: +44 392 264067 e-mail: dca@dcs.exeter.ac.uk Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: takis@poobah.wellesley.edu (Takis Metaxas) Subject: Interactive CD-ROM Proceedings for Parallel Computing *************************************************************************** News FOR IMMEDIATE RELEASE Contact: TELOS, The Electronic Library of Science 3600 Pruneridge Avenue, Suite 200 Santa Clara, CA 95051 Phone: (408) 249-9314 Fax: (408) 249-2595 Email: c.peterson@applelink.apple.com TELOS Publishes Interactive CD-ROM Proceedings for Parallel Computing Santa Clara, California, January 1, 1994 - TELOS (The Electronic Library of Science) announces the release of an interactive CD-ROM entitled "Parallel Computation: Practical Implementation of Algorithms and Machines" (ISBN 0-387-14213-4). This revolutionary CD-ROM utilizes multimedia software on a Macintosh platform and has been devised to replace the traditional practice of publishing proceedings on paper. The 1992 DAGS Parallel Computing Symposium proceedings are brought to you in this exciting format through the enterprising efforts of Peter Gloor (Union Bank of Switzerland and Dartmouth College), Fillia Makedon (Professor of Computer Science at Dartmouth College) and James W. Matthews (Dartmouth). The purpose of Dartmouth's Institute for Advanced Graduate Studies (DAGS) is to strengthen the interaction between the theoretical and practical communities by acting both as a forum and as a working environment to stimulate new applications of parallel computation. In 1992, the DAGS Symposium addressed "Issues and Obstacles in the Practical Implementation of Parallel Algorithms and the Use of Parallel Machines," with the focus on problems that arise in parallel scientific computing. This CD-ROM brings the invited speakers from that meeting to your desktop, along with proprietary hypertext software that allows viewers to annotate the papers. Eight invited speakers' presentations are displayed in video and sound, with an adjacent window displaying the full-color overheads used by each presenter with their talk. In addition to viewing the presentations on the CD, viewers may stop the video at any time, call up the abstract of the original paper, read it, and even print it. Highlighted segments of the talk and the backgrounds of presenters may also be accessed. The CD-ROM is packaged with a 16-page booklet that describes how to use the disc, including how to maneuver through the talks and how to use the software to annotate the papers. Also included on the CD is a HyperCard(r) presentation that explains how the CD was put together, to give viewers interested in capturing their next conference, workshop, or meeting, valuable insight into what is involved in this procedure. 
INVITED SPEAKERS
Programming Parallel Algorithms -- Guy Blelloch, Carnegie-Mellon University
The Network Architecture of the Connection Machine CM-5 -- Charles Leiserson, MIT
From Parallel Algorithms to Applications and Vice Versa: Real Life Stories -- Andrew Ogielski, Bellcore
Sorting Circuits, Fault Tolerance, and Tournament Ranking -- Tom Leighton, MIT
Parallelization of Sequential Code for Distributed-Memory Parallel Processors -- Marco Annaratone, DEC Massively Parallel Systems Group
Internals of the Connection Machine Compiler -- Gary Sabot, Thinking Machines
Architecture of the KSR-1 Computer System -- James Rothnie, Kendall Square Research
The Organization of Parallel FFT's: The Real Data Case -- Charles Van Loan, Cornell University
***************************************************************************** TO ORDER: Call 1-800-777-4643 or return this form via mail to TELOS, The Electronic Library of Science, a Springer-Verlag imprint, 3600 Pruneridge Ave., Ste. 200, Santa Clara, CA 95051, or email the information to c.peterson@applelink.apple.com. Editors: Gloor/Makedon/Matthews Title: Parallel Computation: Practical Implementation of Algorithms and Machines (Macintosh CD and booklet) Price: $69.95 ISBN: 0-387-14213-4 Qty__________ @ $69.95 = $ ______________ Tax* Shipping** Total$ ______________ * CA, MA, NJ, NY, PA, TX, VA, and VT residents include applicable sales tax. Canadian residents include 7% GST. ** Add $2.50 for the first product, $1.00 for subsequent titles. Foreign airmail orders please include $10.00 per product. Method of Payment: o Enclosed check or money order made payable to Springer-Verlag New York, Inc. o Credit Card: VISA, MasterCard, Discover, or American Express Card No ____________________________________________ Expiration Date ____________________________________ Signature __________________________________________ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: R.Kerr@newcastle.ac.uk (R. Kerr) Subject: Re: Language vs. Library in a distributed environment Organization: University of Newcastle upon Tyne, UK, NE1 7RU References: <1994Jan4.142201.779@hubcap.clemson.edu> sinha-arani@CS.YALE.EDU (Arani Sinha) writes: >In the survey paper "Programming Languages for Distributed Computing >Systems" by Bal, Steiner and Tanenbaum (ACM Computing Surveys, Sept. '89), >the authors discuss the advantages of a language over a library as a choice >of a distributed programming environment. >They say: >1. If a library is used, the constructs of a sequential language have to be > used for implementation. The constructs and data types of a sequential > language are considered inadequate for distributed programming. >2. They also claim that a parallel language offers improved readability, > type checking, etc. >I want to know what the netters think of these views. Do you think that >a parallel language is better than a library for distributed programming? I agree with the above observations and have made comments along those lines in a direct reply to your earlier posting about PVM and Linda. I would add to the specific comments above the fact that a programming language generally represents a computational model which enables programmers to express themselves in terms of the intuitive perception they have of the computation. Given a good language, programmers who think parallel are not severely constrained by having to express themselves through an essentially serial medium.
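To make the library-versus-language contrast concrete, here is a small illustration, not from the original postings: the PVM 3 calls are the standard public-domain ones, the send_result() wrapper and the tag value are made up for the example, and the C-Linda lines are shown only as comments because they need the C-Linda translator rather than a plain C compiler.

   /* Library style (PVM 3): the programmer manages buffers, task ids and tags. */
   #include "pvm3.h"

   void send_result(int consumer_tid, double result)
   {
       pvm_initsend(PvmDataDefault);   /* allocate a fresh send buffer    */
       pvm_pkdouble(&result, 1, 1);    /* pack one double into the buffer */
       pvm_send(consumer_tid, 1);      /* ship it, using message tag 1    */
   }

   /* Language style (C-Linda sketch): the tuple space hides the buffering
      and addressing; '?' marks a formal that receives the matched field.
        producer:  out("result", result);
        consumer:  in("result", ? result);    blocks until a match exists  */

The brevity of the second form is the kind of gain the survey attributes to language-level support.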
A "parallel language" does not need to be a full-blown different language. A conventional one, supplemented with a few well-chosen constructs for expressing inherent parallelism, can go a long way. Linda is a good example of that. ------------------------------------------------------------------------ Ron Kerr, Computing Service, Newcastle University, NE1 7RU, England. Tel. +44 91 222 8187 Fax. +44 91 222 8765 ------------------------------------------------------------------------ Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel Date: Tue, 4 Jan 1994 13:15:38 -0500 From: "Dennis Gannon" Subject: Re: Parallel C++ (pC++) References: <1994Jan4.142223.873@hubcap.clemson.edu> fisher@netcom.com (John Fisher) writes: >I am looking for any information I can find on Parallel C++ (pC++). there is an ftp archive at ftp.cica.indiana.edu or moose.cs.indiana.edu in directory pub/sage. There is also a WWW hypertext document that can be reached via http://cica.indiana.edu:www/home-page.html or more directly http://cica.indiana.edu:www/sage/home-page.html Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: ukola@kastor.ccsf.caltech.edu (Adam Kolawa) Subject: Re: Parallel Programming Teaching Date: 4 Jan 1994 19:12:13 GMT Organization: ParaSoft Corporation References: <1994Jan3.145316.10067@hubcap.clemson.edu> In article <1994Jan3.145316.10067@hubcap.clemson.edu> paprzycki_m@gusher.pb.utexas.edu writes: >I am collecting a bibliography of materials related to >Parallel Programming Teaching. > You can get info on our parallel programming classes from our anonymous ftp server at ftp.parasoft.com (192.55.86.17) in the /express/classes directory. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back From: Umesh Krishnaswamy Subject: SLALOM codes Newsgroups: comp.benchmarks,comp.parallel I am looking for Slalom code for Sun Sparcstation. The FTP site (tantalus.al.iastate.edu) which usually has it does not seem to allow anonymous logins. If anybody has it or knows where it is available, could you please let me know. Thanks. Umesh. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: mohr@fullsail.cs.uoregon.edu (Bernd W. Mohr) Subject: Re: Parallel C++ (pC++) Organization: University of Oregon Computer and Information Sciences Dept. References: <1994Jan4.142223.873@hubcap.clemson.edu> fisher@netcom.com (John Fisher) writes: >I am looking for any information I can find on Parallel C++ (pC++). >I know it is the work of Indiana University, but I would be interested >in any papers that might have been published or available via FTP. Technical documents (including published articles) and the programs for pC++ and Sage++ are available via anonymous FTP from moose.cs.indiana.edu:/pub/sage (129.79.254.191) ftp.cica.indiana.edu:/pub/sage (129.79.26.102) We maintain two mailing lists for pC++/Sage++. For information about the mailing lists, and how to join one, please send mail to sage-request@cica.indiana.edu. No Subject or body is required. We are also running a WWW server. Try to connect to "http://www.cica.indiana.edu/sage/home-page.html" Hope this helps Bernd --- Bernd Mohr mohr@cs.uoregon.edu "You can't stop it. It's technology." --- Dave Barry Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jet@nas.nasa.gov (J. Eric Townsend) Subject: Re: PVM vs. 
C-linda Organization: NAS/NASA-Ames Research Center References: <1994Jan4.142253.1056@hubcap.clemson.edu> "li" == Zhe Li writes: li> communication) as part of my PhD thesis work. I played around with li> PVM but have not gained any experience with C-linda yet. Could On what hardware? -- J. Eric Townsend jet@nas.nasa.gov 415.604.4311| personal email goes to: CM-5 Administrator, Parallel Systems Support | jet@well.sf.ca.us NASA Ames Numerical Aerodynamic Simulation |--------------------------- PGP2.2 public key available upon request or finger jet@simeon.nas.nasa.gov Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: nlau@st6000.sct.edu (Neric Lau) Subject: Human Brain Date: 4 Jan 1994 21:29:38 -0500 Organization: Southern College of Technology, Atlanta Is the human brain a kind of parallel computer? Please send replies to nlau@st6000.sct.edu Neric Lau Approved: parallel@hubcap.clemson.edu Path: bounce-back From: b00cjl00%ceres@uunet.UU.NET (Jun-Lin Chen) Newsgroups: comp.parallel,comp.sys.super Subject: Re: User Reports on the IBM PowerParallel? Followup-To: comp.parallel,comp.sys.super Organization: Computer Center, National Chiao-Tung University, Taiwan References: <1994Jan3.145257.9846@hubcap.clemson.edu> Gary L Dare (gld@bonjour.cc.columbia.edu) wrote: : Would anyone know of any user reports on the IBM PowerParallel : "super" that are available to the general computing community? : PS: Please post your responses in case anyone else is interested. I am interested in the reports too. My e-mail: b00cjl00@nchc.edu.tw Thank you and Happy New Year! J.L. Chen Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jjsc@inf.rl.ac.uk (John Cullen) Subject: CFP: World Transputer Congress 1994 (WTC '94) Organization: Rutherford Appleton Laboratory, Informatics Department, UK Reply-To: sch@inf.rl.ac.uk (Dr. Susan Hilton) WTC '94 CALL FOR PAPERS AND TUTORIAL PROPOSALS Villa Erba, Cernobbio, Lake Como, Italy 5 - 7 September, 1994 The Transputer Consortium (TTC) is pleased to announce that the WORLD TRANSPUTER CONGRESS 1994 (WTC '94) will be held on 5 - 7 September 1994 at the Villa Erba, Cernobbio, Lake Como, Italy. WTC '94 is the leading international transputer conference and exhibition and is the second in a series sponsored by and run under the overall management of TTC. SGS-Thomson is also sponsoring WTC '94. It is planned that each year WTC will be held in conjunction with a local partner. For the first, highly successful, WTC '93, the local partner was the German Transputer-Anwender-Treffen (TAT) conference. WTC '93, held at the Eurogress Conference Centre in Aachen, Germany, attracted 475 delegates from 32 countries worldwide. WTC '94 will be held in conjunction with the Italian Transputer User Group (ItTUG), which is due to be formed in early 1994. WTC '94 will incorporate the Inaugural Meeting of ItTUG. WTC '94 will be the first major conference where significant applications of the new T9000 transputer and its associated technologies (e.g. packet routers) will be extensively reported.
OBJECTIVES
* to present `state-of-the-art' research on all aspects of parallel computing based upon communicating process architectures;
* to demonstrate `state-of-the-art' products and applications from as wide a range of fields as possible;
* to progress the establishment of international software and hardware standards for parallel computing systems;
* to provide a forum for the free exchange of ideas, criticism and information from a world audience gathered from Industry, Commerce and Academia;
* to promote an awareness of how transputer technologies may be applied and their advantages over other sequential and parallel processors;
* to establish and encourage an understanding of the new software and hardware technologies enabled by the transputer, especially the new T9000 processor and C104 packet router from INMOS, the parallel DSP engines from Texas Instruments, and new products from Intel and other manufacturers.
The conference themes will include: education and training issues, formal methods and security, performance and scalability, porting existing systems, parallelisation paradigms, tools, programming languages, support environments, standards and applications. Applications include: embedded real-time control systems, workstations, super-computing, consumer products, artificial intelligence, databases, modelling, design, data gathering and the testing of scientific or mathematical theories. BACKGROUND The World Transputer Congress (WTC) series was formed in 1992 from the merger of the TRANSPUTING series of conferences, organised by the worldwide occam and Transputer User Groups, and the TRANSPUTER APPLICATIONS series of conferences, organised by the UK SERC/DTI Transputer Initiative. WTC '93 attracted a large and enthusiastic audience from the majority of countries where transputer technology is accepted and/or parallel processing is seen as the key to meeting future computing demands. There is clearly a continuing, and growing, interest in and commitment to this technology, which will rely on the WTC series to maintain the vital information flow. It is reasonable to assume that WTC has already established itself as the leading conference in this important area; the successes of its predecessors have been a major factor in this. The continuing and vital support of TTC and the large number of User Groups from around the world will ensure a continuing success story for WTC. FORMAT The format adopted for WTC '93 will be continued at WTC '94. There will be a mix of Plenary Sessions, with Keynote and Invited Speakers from around the world, and Parallel Sessions, one of which will be organised by ItTUG. The exact number of Parallel Streams will depend on the quality of papers submitted against this Call for Papers. LOCATION WTC '94 will be held at the Villa Erba Conference and Exhibition Centre, Cernobbio, Lake Como, Italy. Cernobbio is 4 km from Como. The modern complex offers unique conference and exhibition facilities, providing a main conference hall, meeting rooms and reception halls together with an exhibition area which can be divided into a maximum of 280 stands. It is set in the beautiful landscaped grounds of the Villa Erba on the shores of the lake. The Mannerist-style Villa, with its steps down to the lake, was built in 1892 and is of both historic and artistic importance. ACCOMMODATION A range of hotel accommodation (2*, 3* and 4*) has been reserved for WTC '94 in Cernobbio and Como. The majority of these hotels are within easy walking distance of the Villa Erba.
However, there is a limit to the total number of rooms available in the town, so early booking is recommended. Details will be sent, as soon as they are available, to all people who register their interest in WTC '94 by returning the reply slip at the end of this announcement. GETTING THERE Como has excellent air, rail and road access, being within easy reach of two international airports, the main motorways and the trans-European rail networks. The two international airports are Milan (Linate) and Lugano (Agno). Although many more international flights arrive at Milan, special arrangements are being negotiated with Crossair for flights to and from Lugano. Crossair flights connect with international flights at many major European airports. Travelling times by road to Como are 20 minutes from Milan and 15 minutes from Lugano. Buses serving both airports will be provided for delegates. There is a frequent rail service from Milan to Como and regular buses from Como to Cernobbio. Fuller details will be sent, as soon as they are available, to people who register their interest in WTC '94. EXHIBITION An associated exhibition attracting the world's leading suppliers of transputer-based and other relevant hardware, software and application products will be held at the Villa Erba Exhibition Centre. The WTC '93 Exhibition was viewed as a great success by exhibitors and participants alike and attracted a large number of visitors. Companies and other organisations wishing to exhibit at the WORLD TRANSPUTER CONGRESS 1994 should contact one of the Committee members listed at the end of this announcement. Opportunities will also exist for posters and demonstrations of academic achievements. CALL FOR PAPERS The conference programme will contain invited papers from established international authorities together with submitted papers. The International Programme Committee, presided over by Ing. A De Gloria (University of Genoa), Dr S C Hilton (TTC), Dr M R Jane (TTC), Dr D Marini (University of Milan) and Professor P H Welch (WoTUG), is now soliciting papers on all areas described above. All papers will be fully refereed in their final form. Only papers of the highest quality will be accepted. The proceedings of this conference will be published internationally by IOS Press and will be issued to delegates as they register at the meeting. BEST PAPER AWARD The award for the best paper (worth approximately £500) will be based on both the submitted full paper for refereeing and the actual presentation at the Conference. Members of the Programme Committee will be the judges and their decision will be final. The winner will be announced and the presentation made in the final Closing Session on Wednesday, 7 September. PROGRAMME COMMITTEE MEMBERS The Programme Committee consists of invited experts from Industry and Academia, together with existing members from the joint organising user groups based in Australia, France, Germany, Hungary, India, Italy, Japan, Latin America, New Zealand, North America, Scandinavia and the United Kingdom. The refereeing will be spread around the world to ensure that all points of view and expertise are properly represented and to obtain the highest standards of excellence. INSTRUCTIONS TO AUTHORS Four copies of submitted papers (not exceeding 16 pages, single-spaced, A4 or US 'letter') must reach the Committee member on the contact list below who is closest to you by 1 March 1994. Authors will be notified of acceptance by 24 May 1994.
Camera-ready copy must be delivered by 23 June 1994, to ensure inclusion in the proceedings. A submitted paper should be a draft version of the final camera-ready copy. It should contain most of the information, qualitative and quantitative, that will appear in the final paper - i.e. it should not be just an extended abstract. CALL FOR TUTORIALS AND WORKSHOPS Before the World Transputer Congress 1994, we shall be holding tutorials on the fundamental principles underlying transputer technologies, the design paradigms for exploiting them, and workshops that will focus directly on a range of specialist themes (e.g. real-time issues, formal methods, AI, image processing ..). The tutorials will be held on 3 - 4 September 1994 in the Villa Erba itself. We welcome suggestions from the community of particular themes that should be chosen for these tutorials and workshops. In particular, we welcome proposals from any group that wishes to run such a tutorial or workshop. A submission should outline the aims and objectives of the tutorial, give details of the proposed programme, anticipated numbers of participants attending (minimum and maximum) and equipment (if any) needed for support. Please submit your suggestions or proposals to one of the Committee members listed below by 1 March 1994. DELIVERY AND CONTACT POINTS Dr Mike Jane, The Transputer Consortium, Informatics Department, Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, UK Phone: +44 235 445408; Fax: +44 235 445893; email: mrj@inf.rl.ac.uk Dr Daniele Marini, Department of Computer Science, University of Milan, Via Comelico,39, Milan 20135, ITALY. Phone: +39 2 5500 6358; Fax: +39 2 5500 6334 email:marini@imiucca.csi.unimi.it Mr David Fielding, Chair, NATUG, Cornell Information Technologies, 502 Olin Library, Cornell University, Ithaca NY 14853, USA Phone: +1 607 255 9098; Fax: +1 607 255 9346 email: fielding@library.cornell.edu Dr Kuninobu Tanno, Department of Electrical and Information Engineering, Yamagata University, Yonezawa, Yamagata 992, JAPAN Phone: +81 238 22 5181; Fax: +81 238 26 2082 email: tanno@eie.yamagata-u.ac.jp Mr John Hulskamp, Department of Computer Systems Engineering, RMIT, G.P.O. Box 2476V, Melbourne, 3001 AUSTRALIA Phone: +61 3 660 5310 ; Fax: +61 3 660 5340; email: jph@rmit.edu.au Dr Rafael Lins, Chair, OUG-LA, Department de Informatica, UFPE - CCEN, Cidade Universitaria, Recife - 50739 PE BRAZIL Phone: +55 81 2718430; Fax: +55 81 2710359; email: rdl@di.ufpe.br FOR FURTHER INFORMATION PLEASE CONTACT: Dr Susan C Hilton Building R1 Rutherford Appleton Laboratory CHILTON, DIDCOT, OXON. OX11 0QX UK Phone: +44 235 446154 Fax: +44 235 445893 email sch@inf.rl.ac.uk --- _______________________________________________________________________________ JANET : J.Cullen@uk.ac.rl.inf | John Cullen INTERNET: J.Cullen%inf.rl.ac.uk@nsfnet-relay.ac.uk | Comms Research & Support UUCP : {...|mcsun}|uknet|rlinf|J.Cullen | Informatics Department X.400 : /I=J/S=Cullen/OU=informatics/O=rutherford | Rutherford Appleton Lab. /PRMD=UK.AC/ADMD= /C=GB/ | Chilton, Didcot, Oxon Tel/Fax : +44 235 44 6555 / +44 235 44 5727 | OX11 0QX , England Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: jwkhong@csd.uwo.ca (James W. Hong) Subject: CFP (3rd): International Conference on Computing and Information (ICCI'94) Organization: Dept. 
of Computer Science, University of Western Ontario CALL FOR PAPERS ********************* ICCI'94 6th INTERNATIONAL CONFERENCE ON COMPUTING AND INFORMATION May 26 - 28 , 1994 Trent University Peterborough, Ontario Canada ***************************************************************************** Keynote Address: "Text Databases" Professor Frank Tompa Director, Centre for the New Oxford English Dictionary and Text Research University of Waterloo, Canada ***************************************************************************** Steering Committee: Waldemar W. Koczkodaj, Laurentian University, Canada (Chair) S.K. Micheal Wong, University of Regina, Canada R.C. Kick, Technological University of Tennessee, USA ***************************************************************************** General Chair: Pradip Srimani, Colorado State University, USA Organizing Committee Chair: Richard Hurley, Trent University, Canada Program Committee Chair: David Krumme, Tufts University, USA Public Relations Chair: James Hong, University of Western Ontario, Canada ****************************************************************************** ICCI'94 will be an international forum for presentation of new results in research, development, and applications in computing and information. The organizers expect both practitioners and theorists to attend. The conference will be organized in 5 streams: Stream A: Data Theory and Logic, Information and Coding Theory Theory of Programming, Algorithms, Theory of Computation Stream B: Distributed Computing and Communication Stream C: Concurrency and Parallelism Stream D: AI Methodologies, Expert Systems, Knowledge Engineering, and Machine Learning Stream E: Software and Data Engineering, CASE Methodology, Database, Information Technology Authors are invited to submit five copies of their manuscript to the appropriate Stream Chair by the submission deadline. Papers should be written in English, and contain a maximum of 5000 words. Each paper should include a short abstract and a list of keywords indicating subject classification. Please note that a blind review process will be used to evaluate submitted papers. Authors' names and institutions should be identified only on a cover page that can be detached. No information that clearly identifies the authorship of the paper should be included in the body. Authors of accepted papers will be asked to prepare the final version according to the publisher's requirements. It is expected this year's proceedings will again be published by IEEE Computer Society Press or will make the premier issue of a new CD-ROM journal Journal of Computing and Information. Stream Chairs: ************* STREAM A: ======== Si-Qing Zheng Dept. of Computer Science Louisiana State University Baton Rouge, LA, USA 70803-4020 Fax: 504-388-1465 Email: zheng@bit.csc.lsu.edu Email Contact: Anil Shende (shende@dickinson.edu) STREAM B: ======== H. Douglas Dykeman, IBM Zurich Research Lab Saeumerstrasse 4 8803 Rueschlikon Switzerland Fax: 41-1-710-3608 Email: ddy@zurich.ibm.com Email Contact: Bart Domzy (csbcd@blaze.trentu.ca) STREAM C: ======== Eric E. Johnson Parallel Architecture Research Lab Electrical & Computer Engineering Thomas & Brown 106 Las Cruces, NM, USA 88003-0001 Fax: (505) 646-1435 Email: ejohnson@nmsu.edu Email Contact: Reda Ammar (reda@cse.uconn.edu) STREAM D: ======== Maria E. 
Orlowska Computer Science The University of Queensland Brisbane Q 4072 Australia Fax: 61-7-365 1999 Email: maria@cs.uq.oz.au Email Contact: Mike Herman (mwherman@nickel.laurentian.ca) STREAM E: ======== Shing-Tsaan Huang Department of Computer Science National Tsing-Hua University HsinChu, TAIWAN (30043) Fax: 886-35-723694 Email: sthuang@nthu.edu.tw Email Contact: Ken Barker (barkerk@cpsc.ucalgary.ca) Program Committee: ================= Chair: David Krumme, Tufts University, USA J. Abello, Texas A&M U., USA O. Abou-Rabia, Laurentian U., Canada K. Abrahamson, E. Carolina U., USA M. Aoyama, Fujitsu Limited, Japan L.G. Birta, U. Ottawa, CANADA J.P. Black, U. Waterloo, Canada D.L. Carver, Louisiana State U., USA C.-C. Chan, U. Akron, USA S. Chen, U. Illinois, Chicago, USA V. Dahl, Simon Fraser U., Canada S.K. Das, U. North Texas, USA A.K. Datta, U. Nevada, Las Vegas, USA W.A. Doeringer, IBM Res. Lab., Zurich, Switzerland D.-Z. Du, U. Minnesota, USA E. Eberbach, Acadia University, Canada A.A. El-Amawy, Louisiana State U., USA D.W. Embley, Brigham Young U., USA W.W. Everett, AT&T Bell Labs., USA A. Ferreira, CNRS-LIP, France I. Guessarian, Paris 6 U., France J. Harms, U. Alberta, Canada S.Y. Itoga, U. Hawaii, USA J.W. Jury, Trent U., Canada M. Kaiserswerth, IBM Res. Lab., Zurich, Switzerland M. Li, U. Waterloo, Canada M.K. Neville, Northern Arizona U., USA P. Nijkamp, Free U. Amsterdam, The Netherlands K. Psarris, Ohio U., USA P.P. Shenoy, U. Kansas, USA G. Sindre, Norwegian Inst. Technology, Norway R. Slowinski, Technical U. Poznan, Poland M.A. Suchenek, Cal. State U., Dominguez Hills, USA V. Sunderam, Emory U., USA R.W. Swiniarski, San Diego State U., USA A.M. Tjoa, U. Vienna, Austria R. Topor, Griffith U., Australia A.A. Toptsis, York U., Canada C. Tsatsoulis, U. Kansas, USA W.D. Wasson, U. New Brunswick, Canada L. Webster, NASA/Johnson Space Center, USA E.A. Yfantis, U. Nevada, Las Vegas, USA Y. Zhang, U. Queensland, Australia =*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*= DEADLINES Jan. 17, 1994 (Mon) Paper submission deadline to the appropriate Stream Chair Mar. 15, 1994 (Tue) Email notification of acceptance May 26, 1994 (At the conf.) Final version due ========================================================================= For further information, please contact: Richard Hurley Organizing Committee Chairman Computer Studies Program Trent University Peterborough, ON, Canada K9J 7B8 Phone: (705) 748-1542 Fax: (705) 748-1625 Email: icci@flame1.trentu.ca Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rzantout@magnus.acs.ohio-state.edu (Rached N Zantout) Subject: difficulties using cread() on the Delta Organization: The Ohio State University Hello, I just started using the Delta machine, and I am having difficulties using the cread (or iread) command. My data file looks like: 1.0000000E+00 2.0000000E+00 2.0000000E+00 3.0000000E+00 4.0000000E+00 1.0000000E+00 1.0000000E+00 5.0000000E+00 6.0000000E+00 2.0000000E+00 3.0000000E+00 4.0000000E+00 1.0000000E+00 2.0000000E+00 Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel,comp.lang.fortran,comp.sys.super From: forge@netcom.com (FORGE Customer Support) Subject: FORTRAN PARALLELIZATION WORKSHOP FEB1-3 Keywords: FORTRAN PARALLEL Organization: Applied Parallel Research, Inc. =============================================================== APR Applied Parallel Research, Inc.
Workshop 1-3 February 94 =============================================================== PARALLEL PROCESSING IN FORTRAN -- WORKSHOP Placerville, CA 1-3 February 1994 <<< Places are going fast -- There are still seats open >>> <<< CALL soon to reserve your place >>> APR announces a three-day workshop on parallel processing techniques in Fortran, and the use of APR's FORGE parallelization tools. The instructors will be Gene Wagenbreth and John Levesque, Applied Parallel Research, Inc. Each day of the workshop includes time for individual and group "hands-on" practice with APR's FORGE tools. Participants are encouraged to bring their own programs to work on. This workshop will also present APR's new batch tools, dpf and xhpf, that have the capability of automatically parallelizing real Fortran programs for distributed memory systems. OUTLINE: Day 1: AM: Intro to Parallel Processing o Parallel architectures - SIMD & MIMD o Memory architectures - Shared, Distributed, Multi-level o Programming paradigms - Domain decomposition, SPMD o Language issues - Fortran 77, 90, High Performance Fortran o Performance measurement - profiling tools, parallel simulation PM: Intro to FORGE 90 o Overview o Source code browser o Instrumenting serial programs o Workshop using FORGE 90 Day 2: AM: Parallelizing for Distributed Memory using FORGE 90 (DMP and dpf) o Data decomposition o Loop distribution o Using APR Directives in Fortran 77 Programs - dpf o Using AutoMAGIC parallelization within dpf and xHPF o The programming model - SPMD paradigm o Parallel Simulator o Parallelization inhibitors/prohibitors o Efficiency of transformations o Problems and work-arounds PM: Open Workshop using FORGE 90 DMP Day 3: AM: FORGE 90's High Performance Fortran Products - xhpf o Overview o HPF Data Distribution Directives o Using HPF directives in Fortran 77 programs - xhpf o Using HPF directives in Fortran 90 programs - xhpf o Investigation of Parallelization Results using FORGE 90 DMP o Using the Parallel Profiler with xhpf PM: Open Workshop using FORGE 90 DMP, dpf and xhpf modules over IBM RS6K and HP/9000 workstations using PVM. Bring your own codes to work with on cartridge tape. FTP access is available from our network. ------------------------------------------------------------------------- Registration fee is $1000 ( $800 for FORGE 90 customers), and includes materials and access to workstations running FORGE 90 and PVM. Location is at the offices of Applied Parallel Research in Placerville, California, (45 miles east of Sacramento, near Lake Tahoe). Classes run 9am to 5pm. Accommodations at Best Western Motel in Placerville can be arranged through our office. Contact: Applied Parallel Research, Inc., 550 Main St., Placerville, CA 95667 Voice: 916/621-1600. Fax: -0593. Email: forge@netcom.com ============================================================================== -- /// Applied /// FORGE 90 Customer Support Group /// Parallel /// 550 Main St., Placerville, CA 95667 /// Research, Inc. (916) 621-1600 621-0593fax forge@netcom.com Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: stasko@cc.gatech.edu (John Stasko) Subject: KSR program visualization package Organization: College of Computing, Georgia Tech KSR Pthread Program Visualization ================================= Introduction ------------ We have developed a set of animation views (we call it Gthreads) for illustrating the execution of Pthread programs on the KSR-1. 
We now make the system available to all pthreads programmers. We hope that it will help you to develop and debug programs on the KSR-1. The views illustrate threads, their migration through a call graph, barriers, mutexes, etc. We also hope that we can gather comments and ideas to improve it, and create more interesting animations. PLEASE feel free to ask any questions you may have. We are eager to incorporate suggestions and recommendations about the animation library back into the system. What you need ------------- Obviously, you will need access to a KSR machine in order to run your programs and gather trace information. For the animation component, you need a UNIX workstation (we've only tested it on SPARCstations) running the X Window System and Motif, and you will need a C++ compiler that supports templates. How to get it ------------- Gthreads is available via anonymous ftp from par.cc.gatech.edu. You must retrieve two files: gthread.KSRtracing.tar.Z and gthread.Animations.tar.Z. The first goes to your KSR machine, and the second must go on your graphics workstation. Once you retrieve the files from ftp, uncompress them, and then tar them out:
% ftp par.cc.gatech.edu
login: ftp
password: yourname@yoursite
> cd pub
> binary
> get README
> get gthread.KSRtracing.tar.Z
> get gthread.Animations.tar.Z
> quit
% uncompress gthread.KSRtracing.tar.Z
% tar xvf gthread.KSRtracing.tar
...
From that point, each distribution has a top level README file with instructions on how to proceed. Again, if you need help, just ask. General Information ------------------- Gthreads is implemented using the Polka animation package, which is also available via anonymous ftp at the same site as above. If you would like to design your own animation views or explore visualizations of concurrent programs some more, feel free to pick up Polka. In return for the use of this software, all we ask is that you provide us with feedback on what you found good, bad, useful, not useful, and so on. To be put on a mailing list about the system or to just ask questions about Gthreads or Polka, please direct correspondence to John Stasko via the address below. ------------------------------------------------------------------------- John Stasko, Assistant Professor Graphics, Viz., and Usability Center phone: (404) 853-9386 College of Computing fax: (404) 853-0673 Georgia Institute of Technology Internet: stasko@cc.gatech.edu Atlanta, Georgia 30332-0280 uucp: ...!gatech!cc!stasko Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: hum@doe.carleton.ca (Harvey J. Hum) Subject: DSP libraries for iWarp board from intel ? Organization: Carleton University Date: Wed, 5 Jan 1994 21:32:56 GMT Does anyone have any DSP libraries for the Intel iWarp board, or know of anyone who uses this board? I know that Carnegie Mellon University in Pennsylvania has these boards as well. Does anyone have an e-mail contact at this university? Please e-mail your responses to hum@doe.carleton.ca. Thanks in advance for your help.
H.Hum Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: rks@csd.unsw.OZ.AU (Russell Standish) Subject: Re: Need help: host/node programming on CM-5 us Reply-To: rks@csd.unsw.OZ.AU Organization: University of New South Wales, Australia References: <1993Dec28.152940.19138@hubcap.clemson.edu> > When I tried to link with cmmd-ld, it gave a bunch of errors, namely _main > multiply defined and some unresolved externals like CMM_enable. > >Could someone please enlighten me regarding use of this linker for a PVM program? cmmd-ld is actually a script which calls ld several times, along with awk. Perhaps the best way to see how it works is to examine the script. Hope this helps --- -------------------------------------------------------- Dr. Russell Standish Parallel Programming Consultant System Software Unit, Computer Services Department, Room 1410, Library Tower University of NSW Phone 697 2855 PO Box 1, Kensington, 2033 Fax 662 8665 Australia R.Standish@unsw.edu.au Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: das@ponder.csci.unt.edu (Sajal Das) Subject: CALL FOR PAPERS Date: 6 Jan 1994 00:21:42 GMT Organization: University of North Texas, Denton Summary: Special Issue on Parallel Algorithms and Architectures ******************* * CALL FOR PAPERS * ******************* JOURNAL OF COMPUTER & SOFTWARE ENGINEERING -------------------------------------------- SPECIAL ISSUE on PARALLEL ALGORITHMS & ARCHITECTURES (Tentative Publication Date: January 1995) Due to fundamental physical limitations on processing speeds of sequential computers, the future-generation high performance computing environment will eventually rely entirely on exploiting the inherent parallelism in problems and implementing their solutions on realistic parallel machines. Just as the processing speeds of chips are approaching their physical limits, the need for faster computations is increasing at an even faster rate. For example, ten years ago there was virtually no general-purpose parallel computer available commercially. Now there are several machines, some of which have received wide acceptance due to reasonable cost and attractive performance. The purpose of this special issue is to focus on the design and analysis of efficient parallel algorithms and their performance on different parallel architectures. We expect to have a good blend of theory and practice. In addition to theoretical papers on parallel algorithms, case studies and experience reports on applications of these algorithms to real-life problems are especially welcome. Example topics include, but are not limited to, the following: Parallel Algorithms and Applications. Machine Models and Architectures. Communication, Synchronization and Scheduling. Mapping Algorithms on Architectures. Performance Evaluation of Multiprocessor Systems. Parallel Data Structures. Parallel Programming and Software Tools. *********************************************************************** Please submit SEVEN copies of your manuscript to either of the * Guest Editors by May 1, 1994: * * *********************************************************************** Professor Sajal K. Das || Professor Pradip K. Srimani * Department of Computer Science || Department of Computer Science * University of North Texas || Colorado State University * Denton, TX 76203 || Ft.
Collins, CO 80523 * Tel: (817) 565-4256, -2799 (fax) || Tel: (303) 491-7097, -6639 (fax) * Email: das@cs.unt.edu || Email: srimani@CS.ColoState.Edu * *********************************************************************** INSTRUCTIONS FOR SUBMITTING PAPERS: Papers should be 20-30 double-spaced pages including figures, tables and references. Papers should not have been previously published, nor currently submitted elsewhere for publication. Papers should include a title page containing the title, authors' names and affiliations, postal and e-mail addresses, telephone numbers and fax numbers. Papers should include a 300-word abstract. If you are willing to referee papers for this special issue, please send a note with your research interests to either of the guest editors. Approved: parallel@hubcap.clemson.edu Follow-up: comp.parallel Path: bounce-back Newsgroups: comp.parallel From: cappello@cs.ucsb.edu (Peter Cappello) Subject: ASAP'94: Call for papers Date: 6 Jan 1994 02:01:48 GMT Organization: University of California, Santa Barbara A S A P '94 INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC ARRAY PROCESSORS 22-24 August 1994 The Fairmont Hotel San Francisco Sponsored by the IEEE Computer Society ASAP'94 is an international conference on application-specific computing systems. This conference's lineage traces back to the First International Workshop on Systolic Arrays held in Oxford, England, in July 1986, and has continued through the International Conference on Application-Specific Array Processors held in Venice, Italy, in Oct. 1993. Areas for application-specific computing systems are many and varied. Some sample areas follow: CAD tools; computational biology, chemistry, geology, pharmacology, physics, and physiology; cryptography; data base, information retrieval, and compression; electronic commerce; high-performance networks; medical equipment; robotics and prosthetics; signal and image processing. Aspects of application-specific computing systems that are of interest include, but are not limited to:
- Application-specific architectures
- Application-specific fault tolerance strategies
- Application-specific test & evaluation strategies
- CAD tools for application-specific systems
- Design methodology for application-specific systems
- Special-purpose systems for fundamental algorithms
- Implementation methodology & rapid prototyping
- Standard hardware components & software objects
- Systems software: languages, compilers, operating systems
The conference will present a balanced technical program covering the theory and practice of application-specific computing systems. Of particular interest are contributions that either achieve large performance gains with application-specific computing systems, introduce novel architectural concepts, present formal and practical methods for the specification, design and evaluation of these systems, analyze technology dependencies and the integration of hardware and software components, or describe and evaluate fabricated systems. The conference will feature an opening keynote address, technical presentations, a panel discussion, and poster presentations. One of the poster sessions is reserved for on-going projects and experimental systems. INFORMATION FOR AUTHORS Please send 5 copies of your double-spaced typed manuscript (maximum 5000 words) with an abstract to a Program Co-Chair.
Your submission letter should indicate which of your paper's areas are most relevant to the conference, and which author is responsible for correspondence. Your paper should be unpublished and not under review for any other conference or workshop. The Proceedings will be published by the IEEE Computer Society Press. CALENDAR OF SIGNIFICANT EVENTS 18 Feb. Deadline for receipt of papers. 29 Apr. Notification of authors. 24 Jun. Deadline for receipt of photo-ready paper. 22 Aug. Conference begins.
GENERAL CO-CHAIRS
Prof. Earl E. Swartzlander, Jr., e.swartzlander@compmail.com, Electrical & Computer Engineering, University of Texas, Austin, TX 78712; (512) 471-5923; (512) 471-5907 (Fax)
Prof. Benjamin W. Wah, wah@manip.crhc.uiuc.edu, Coordinated Science Lab., University of Illinois, 1308 West Main Street, Urbana, IL 61801; (217) 333-3516; (217) 244-7175 (Fax)
PROGRAM CO-CHAIRS
Prof. Peter Cappello, cappello@cs.ucsb.edu, Computer Science, University of California, Santa Barbara, CA 93106; (805) 893-4383; (805) 893-8553 (Fax)
Prof. Robert M. Owens, owens@cse.psu.edu, Computer Science & Engineering, Pennsylvania State Univ., University Park, PA 16802; (814) 865-9505; (814) 865-3176 (Fax)
EUROPEAN PUBLICITY CHAIR
Prof. Vincenzo Piuri, e-mail piuri@ipmel1.polimi.it, Dept. of Electronics and Information, Politecnico di Milano, p.za L. da Vinci 32, I-20133 Milano, Italy; +39-2-23993606; +39-2-23993411 (Fax)
Please forward this Call to all interested parties.